Leadership

A Chief Analytics Officer's Advice for Navigating AI's New Frontiers Without Sparking a PR Crisis

AI Data Press - News Team | October 3, 2025
Credit: Douglas Rissing (edited)

Key Points

  • A recent controversy over misinterpreted claims of AI-driven pricing at Delta Air Lines shows how quickly a false narrative can become a business threat.

  • Bill Franks, Director of Kennesaw State University’s Center for Data Science & Analytics, explains that the damage came from perception, not reality; the rumored practices likely wouldn’t even have been legal.

  • He cautions that such crises are often self-inflicted wounds caused by a knowledge gap between leaders and subject matter experts.

  • Franks advises that navigating the AI frontier requires radical transparency and a risk model that considers not just legality but customer acceptance.

The real damage wasn't done by what the AI did. It's what people thought it did. You need to be really careful in what you are stating, why you are stating it, and then making sure you are saying it in a way that people will understand without room to extrapolate or misunderstand.

Bill Franks

Director of Data Science & Analytics
Kennesaw State University

Rumors spread fast, and AI can spread misinformation even faster. A recent controversy involving Delta Air Lines and claims of AI-driven personalized pricing—which the company publicly denied—shows how quickly a false story can become a business threat. For leaders, the challenge today is not just governing the technology but owning the narrative and the perception of their brand.

It’s a problem familiar to Bill Franks, an analytics veteran who was the first-ever Chief Analytics Officer for both Teradata and the International Institute for Analytics. Now the Director of Kennesaw State University’s Center for Data Science & Analytics, Franks is an author and globally recognized expert on data science and ethics. In his view, while the technology is new, the communication pitfalls are classic and, more importantly, avoidable.

For Franks, the Delta incident is a textbook case of a perception-first crisis. “The real damage wasn't done by what the AI did,” he says. “It's what people thought it did. You need to be really careful in what you are stating, why you are stating it, and then making sure you are saying it in a way that people will understand without room to extrapolate or misunderstand.” The initial rumors were so compelling that even he was momentarily misled. “I fell for it myself,” Franks admits. “Here I am in this industry, and I believed what I read without digging deeper. My first thought was, ‘I don't know how they can do this legally.’ I don't think the laws would even allow it today.”

  • Paved with good intentions: He was right, as Delta later clarified its pricing strategy and compliance. But the incident shows how fast a misunderstanding can spin into a real business problem. In Franks’ view, it’s often a self-inflicted wound. “Most of the unethical things I've seen arise are innocent in intention. Someone just didn't think something through or didn't realize how something would work downstream. It's almost always a mistake, but by rushing, you increase the chance of those mistakes happening,” he says.

The root cause is a knowledge gap. “The reality is that the various stakeholders putting together public statements are often not the subject matter experts,” he explains. That gap creates a risk of ambiguous messaging that can be easily misinterpreted. When Franks was a chief analytics officer, his CEO would consult him before earnings calls to make sure he had the details right on analytics-related topics. “It was a risk mitigator because our CEO knew this wasn't his area of expertise. The deep data and analytics were not his background. Recognizing that, he reached out for a sanity check,” Franks says.

  • Welcome to the frontier: With AI, Franks says leaders must operate with a new level of caution, because the pace of change itself has created a unique challenge. “The problem I've seen with generative AI is that it's evolving so fast, it's outpacing our ability to ideate on it. I will see a press release and be genuinely surprised, not realizing the technology was already that advanced. When you're on the frontier of technology, you have to be extra cautious,” he warns.

  • Acceptance level: Navigating this frontier often requires a more sophisticated risk model than simple legal compliance. Franks points to the famous case in which Target, using analytics, correctly identified a teenage girl’s pregnancy and sent her coupons for baby items, shocking her unsuspecting father. For Franks, the incident illustrates a three-tiered framework for evaluating risk. “There’s no doubt that what they did was 100% legal; that's the first layer,” Franks states. “At the next level, it's hard to argue it was unethical, because it's a loyalty program. But the tightest level is what people will accept. It doesn't matter if it was legal and probably ethical to predict pregnancy. Customers didn't like it. It was too personal. And they failed on that front.” Unfortunately, the Target employee who excitedly described the project to a reporter over lunch, sparking the ensuing controversy, was a sincere, junior-level analyst with no media training. “He had no idea. It was a painful lesson in the risks of leaving communication to the unprepared,” Franks cautions.

In this environment, he believes the best defense is radical transparency. For Franks, it's not about universal approval, but clarity. He cites Apple’s refusal to unlock an iPhone for the FBI: the decision was polarizing, but it gave customers a clear understanding of the company’s principles. “No matter what you put out, someone's going to be upset, and someone else is going to be happy. In the end, you have to trust that if you're being open about it, the people who share those values will know what you're doing. For every customer that you lose, you'll gain another,” he says.

Successfully navigating the AI frontier demands that leaders govern the technology, but also proactively manage the story around it. Missteps seem all but inevitable, but the companies that embed clarity, humility, and expert oversight into their communications will be the ones best equipped to handle the fallout.