AI is compressing the distance between a scientific idea and a testable result from months to minutes. For computational researchers, code-based AI assistants now connect directly to compute clusters, pull external datasets, run complex analyses, and return findings in a single session. The shift turns data analysis into an almost real-time layer on top of existing infrastructure. But that speed introduces a critical dependency: without rigorous verification baked into every step, the same tools that accelerate discovery can scale flawed conclusions just as fast.
Olivier Elemento is the Director of the Caryl and Israel Englander Institute for Precision Medicine and Associate Director of the Institute for Computational Biomedicine at Weill Cornell Medicine. His research group combines big data, AI, and genomic profiling to accelerate the discovery of cancer treatments, with more than 450 published scientific papers. He also led the development of New York State's first approved whole-exome sequencing test for oncology. From his vantage point, the structural change underway in biomedical research is unlike anything he has seen in decades of scientific work.
"I've rarely been able to do as much research as I've been able to do in the past few months in my entire scientific life. I'm throwing the hardest problems at it, and it seems like there's nothing it can't do. It's a revolution for dry lab research," says Elemento. Where research once followed a single branch of inquiry at a time, Elemento describes a new model where scientists traverse an entire tree of hypotheses simultaneously. The bottleneck of training a new student, waiting for results, and iterating over months collapses into a direct feedback loop between researcher and AI.
Query and run: The transformation is most pronounced in computational work. "What's changing isn't just speed. It's that AI is turning data analysis into an almost 'query-and-run' layer directly on top of compute systems and large datasets," Elemento says. "I don't even have to log into a cluster. It just logs there and runs analysis and gets results back."
A tree, not a line: That speed enables a fundamentally different research topology. "Instead of testing one hypothesis at a time, you can test many. I think of research as a tree of exploration. It used to be that you could follow just a few branches, but now you can broadly investigate multiple directions at the same time."
But the acceleration carries a systemic risk that other AI practitioners have flagged. AI tools tend to present results with high confidence regardless of accuracy, and verification is not yet a default behavior. "The biggest issue I see is verification. These tools make it look like everything is always working, and sometimes they make mistakes," Elemento warns. "I'm concerned that we're going to see investigations that are flawed because of a lack of this verification step."
Agent-checked science: To counter the risk, Elemento builds layered validation directly into his workflow. When drafting papers, for example, he deploys sub-agents to independently verify every reference against PubMed. "If you don't do that, I can guarantee that hallucinations will occur. But the verification step, even an AI-based one, can easily catch those things right now."
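Elemento doesn't detail how his sub-agents query PubMed, but the check itself is simple to sketch. A minimal, hypothetical version can use NCBI's public E-utilities `esearch` endpoint (the `db=pubmed` parameter, the `[Title]` field tag, and `retmode=json` are real E-utilities features; the function names here are illustrative):

```python
import json
import urllib.parse
import urllib.request

# NCBI E-utilities search endpoint (public, no key required for light use).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def build_query_url(title: str) -> str:
    """Build an esearch URL that looks a citation title up in PubMed."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": f"{title}[Title]",  # restrict the match to the title field
        "retmode": "json",
    })
    return f"{EUTILS}?{params}"


def parse_match_count(body: str) -> int:
    """Extract the record count from an esearch JSON response."""
    return int(json.loads(body)["esearchresult"]["count"])


def reference_exists(title: str) -> bool:
    """True if PubMed has at least one record with this title.

    A drafted reference whose title returns zero matches is a likely
    hallucination and should be flagged for human review.
    """
    with urllib.request.urlopen(build_query_url(title), timeout=10) as resp:
        return parse_match_count(resp.read().decode()) > 0
```

A sub-agent in the workflow Elemento describes would run a check like this over every reference in a draft and surface the zero-match titles, leaving the final judgment to the researcher.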
A psychological adjustment: The deeper challenge is cultural. The role of the human scientist shifts from hands-on executor to a verification and oversight layer, constantly switching context across a broader surface area of results. "These AI models tend to be very optimistic about their own work. They can make you feel like a result is very high confidence and reliable while in fact it's not. That's a psychological adjustment."
Despite the risks, Elemento sees reason for optimism on one of science's most persistent problems: reproducibility. Because AI tools maintain a complete log of every prompt, analysis, and result, they create a built-in record that is harder to selectively edit than a traditional lab notebook. He also envisions AI systems building large-scale databases of negative results, a resource that has never existed in science because journals rarely publish failed experiments.
Looking further ahead, Elemento predicts the concept of the scientific publication itself will evolve. As AI makes analysis nearly free, the value shifts decisively toward the data. He envisions a future where researchers exchange computational models rather than static papers, and where the uniqueness of a scientific group is defined by the data it can produce, not the analyses it can run.
"The real value is shifting toward the data itself," Elemento concludes. "Once data is accessible, AI makes analysis and insight generation effectively close to free. We may need to rethink scientific publication entirely. It could become an exchange of models, not just an exchange of papers."