AI Data Press | Powered by EnterpriseDB © 2025
Chris Pace, VP of Healthcare Industry at SearchStax, explains the healthcare industry's transition from AI hype to a disciplined approach focused on empathy, privacy, and trust.
He describes how clearly defined AI use cases, deployed in secure environments, can help healthcare organizations strike a better balance between innovation and risk.
Pace cautions that the mismanagement of AI in healthcare can lead to loss of trust and market share, concluding that formal AI governance and human oversight are essential to prevent data exposure in healthcare settings.
In healthcare, promises of AI efficiency are colliding with the non-negotiable demands of empathy, privacy, and trust. As the industry transitions from hype to disciplined adoption, many are beginning to find more value in the boundaries placed around AI than in the technology itself. Now, the new mandate for healthcare marketing leaders is becoming clear: be disciplined, not dazzled.
Chris Pace, VP of Healthcare Industry at SearchStax, has been on the front lines of this tension for over 15 years. With a background in strategic marketing and digital experience, his expertise is backed by senior leadership roles at major U.S. healthcare systems like Banner Health and Dignity Health. For Pace, the solution lies in a framework of clear AI use cases that act as guardrails between innovation and risk.
Context is everything: Pace draws a hard line between public-facing marketing and secure patient environments. "Private data should never be used in a public-facing setting, ever. But in a secure, logged-in experience, there's an opportunity. In EHRs, for example, AI can take highly clinical information and create an easy-to-understand narrative. That adds value, shows empathy, and builds trust. That data should never be in a blog or email because it's private. It's all about where the data lives and its end use case."
In this secure context, AI can become a tool for building empathy, a practice already in use at large healthcare institutions. The same principle extends to the clinical frontier, Pace explains. Even healthcare organizations navigating high-tech, high-touch situations keep returning to the same profound ethical question: How do you balance what's right for the patient against what's right for the practice of medical care?
Threat and opportunity: A changing digital environment is forcing the issue, according to Pace. "With generative AI answering so many searches in the healthcare space, it creates a 'zero-click environment' where fewer, more qualified visitors arrive on a health system's website. On the other hand, AI can also be an efficiency lever for marketing teams doing more with less. It connects people to transactions a lot quicker, with less labor. And it unifies siloed analytics for complete visibility into the patient journey."
Unknown unknowns: AI-powered site search can help users find what they're looking for, Pace explains. But it can also help them identify what they were failing to see, exposing content gaps that were previously invisible. "The huge benefit of AI is its efficiency in discovering the things you don't know you don't know."
In healthcare, however, speed without empathy can be a disaster. Misjudge this balance, and you risk your credibility and your market share, Pace cautions. For instance, Google heavily scrutinizes healthcare content under its Your Money or Your Life (YMYL) standard, penalizing untrustworthy sources. Worse, unbridled generative AI can hallucinate results and expose protected health information (PHI). Eroding trust, whether through a data breach or a lack of empathy, will send patients directly to competitors, he explains.
The trust transaction: In a high-stakes field like healthcare, both patients and search engines are quick to penalize content that lacks a credible, human touch, according to Pace. "If you overuse AI tools to develop content, people will notice. Google will certainly notice. There are signals and tells that reveal a lack of human touch."
The price of power: Also concerning is the rise of sophisticated disinformation, something Pace sees as a significant societal risk. "The scale of disinformation and the difficulty for a human to differentiate 'Is this real or is this artificial?' That's the negative consequence. And that balance is only going to become harder to find as the technology gets better," he warns.
Because a lack of AI governance creates risk, formal governance committees are essential, Pace continues. He recounts his experience at two large health systems, where a "golden record" of a customer could be viewed horizontally, with strict governance dictating what data is exposed to which channel and when. This type of system could prevent data from a patient's insurance plan from being used in clinical care communications, he explains.
Human in the loop: To counter the risks of data exposure and eroding trust, Pace sees formal human oversight as a requirement. "With any AI tool, you must have significant human oversight. That's the role of AI governance committees: to ensure there's always a person watching the machine."
Permission over pardon: Pace also stresses that marketing teams must proactively collaborate with legal and privacy departments before implementing any new tool. "Have the conversation with legal and privacy upfront. You don't want to ask for forgiveness later with this technology. Because forgiveness can mean legal issues or job loss."
The conversation reflects a broader trend among large, academically focused organizations, Pace continues. At the Mayo Clinic and Johns Hopkins, teams are already using AI to pursue cures and bring research to market faster. "In our lifetime, AI's hyperscaling algorithms could help us see an end to Parkinson's and Alzheimer's. It could even help us cure certain cancers," Pace says.
Ultimately, Pace concludes that the true value of the technology should be measured by a more immediate metric: its ability to connect people to the care they need. "At the end of the day, most healthcare marketers just want to help people get better. That's why we do this. Technology will allow us to get the right information to the right consumer at the exact right time, more often."