There’s a persistent myth that pharma’s biggest AI risk is moving too fast. Deploying unvalidated models. Cutting corners on compliance. Rushing a technology into regulated processes before it’s ready.
The real risk is the opposite: not deciding is a decision too — and it has consequences.
If you can’t decide today whether to introduce a specific AI system, you’ve effectively decided not to introduce it. And that non-decision carries its own cost — in lost competitive ground, in talent that leaves for companies that move faster, in discovery timelines that stay at 15 years instead of shrinking to 7.
The pharma companies that will define the next decade aren’t the ones who waited for perfect certainty. They’re the ones who built the organizational muscle to decide, execute, learn, and recalibrate — fast.
No Five-Year Plans. A North Star Instead.
Every large enterprise loves a strategy deck. A five-year roadmap. Milestones. Deadlines where — as the old consulting joke goes — “someone has to be dead.”
That model is broken for AI.
When a new foundation model drops every two weeks, when a hyperscaler launches new products that invalidate your tooling decision from three weeks ago — a rigid five-year plan isn’t a strategy. It’s a liability.
What works instead: a directional North Star — 2 to 3 years out — that’s explicit about ambition but not cast in stone. Are you a leader, a fast follower, or a wait-and-see player? That choice cascades into everything: investment priorities, build-vs-partner decisions, talent strategy, organizational speed.
Bayer chose to lead. They’re consistently in the top 3 of pharma AI readiness rankings — and they’re aiming for number one. Not for the sake of AI, but because they believe AI fundamentally accelerates their mission: getting medicines to patients and products to farmers faster.
The key insight: a decision made with the information available at the time is a good decision — even if new information would lead to a different one today. Recalibrating isn’t failure. It’s the system working as intended.
Freedom in a Frame
One of the most underrated challenges in enterprise AI isn’t the technology. It’s the governance model.
Lock everything down and you kill adoption. Open everything up and you create compliance nightmares. The emerging best practice: “Freedom in a Frame” — a governance approach that defines clear boundaries (ethical guidelines, regulatory requirements, responsible AI principles) while giving people genuine room to explore, build, and deploy within those boundaries.
This isn’t a metaphor. It’s an operating model. Leading pharma companies are building cross-functional squads — not tied to a single reporting line — covering AI communication, training, upskilling, and responsible AI. Bayer’s Tech Lead described how, when the shift to agentic AI required a new Responsible AI scope, their squad was reconfigured in a month. Not a quarter. Not a budget cycle. A month.
That organizational speed is a competitive advantage most companies underestimate. If your company needs a year to secure budget and headcount for a new AI initiative, you’re not just slow — you’re structurally incapable of keeping pace with the technology.
Democratization at Scale
The companies winning at AI aren’t the ones with the best models. They’re the ones where the most people actually use them.
Bayer’s internal GenAI community has grown to 14,000 members. Monthly community calls draw 1,500 to 2,000 participants where employees present their own use cases. This isn’t a top-down mandate. It’s a bottom-up movement enabled by infrastructure and culture.
Their internal platform, myGenAssist, serves over 40,000 employees with customized AI assistants connected to internal and external data sources. Bayer’s Tech Lead described how she builds personal assistants for different domains — loading them with relevant PowerPoints, market research, templates — to rapidly context-switch between topics and even challenge her own thinking against the assistant’s synthesized knowledge.
The results speak for themselves: in one case, a team used AI to narrow millions of potential molecular candidates down to 250 viable options in two weeks — work that wouldn’t have been possible at all without AI, let alone at that speed.
This is the unlock that matters. Not “we saved X hours” — but “we did things we literally couldn’t have done before.”
What the Leaders Are Actually Building
This democratization and speed doesn’t happen in a vacuum. The largest pharma companies are placing massive, concrete bets on AI infrastructure — and they’re doing it with technology that actually works at scale.
Eli Lilly: The AI Factory
Lilly isn’t experimenting. They’re industrializing.
In partnership with NVIDIA, Lilly is building the most powerful AI supercomputer owned by any pharmaceutical company — an “AI factory” managing the full lifecycle from data ingestion and training to fine-tuning and inference. They’ve committed $1 billion to a co-innovation AI lab in San Francisco, co-locating Lilly scientists with NVIDIA AI engineers to reinvent drug discovery.
Over 1,000 AI projects deployed. An estimated 1.4 million hours of human work saved. Their TuneLab platform — built on roughly $1 billion in proprietary data investment — is now open to external biotech companies. Total AI-adjacent infrastructure investments across new manufacturing facilities in Alabama ($6B), Virginia ($5B), Pennsylvania ($3.5B), and Ireland ($1.8B) demonstrate commitment at a scale that leaves no room for ambiguity.
Merck: AI360 — Full-Spectrum Strategy
Merck KGaA’s AI360 strategy spans all three business sectors: Healthcare, Life Science, and Electronics. What makes it compelling is its breadth — drug discovery, patient adherence through digital health platforms, and semiconductor material solutions, the last a recognition that Merck’s Electronics division directly enables the hardware that powers AI.
Their internal LLM platform, myGPT, has grown to over 27,000 regular users. Their TEDDY foundation models are pushing beyond the limits of existing gene regulatory network analysis. And they’re serving 80+ internal AI project teams globally — a scale that requires genuine organizational commitment, not just a strategy deck.
Bayer: Speed, Partnerships, and Internal Infrastructure
Bayer’s approach combines strategic partnerships with AI-native companies and deep internal infrastructure.
The Recursion Pharmaceuticals collaboration has expanded multiple times — most recently pivoting to precision oncology with an $80 million upfront investment and up to $1.5 billion in success-based payments. In January 2026, Bayer announced a three-year collaboration with Cradle for AI-enabled antibody discovery and optimization.
Internally, their Crop Science division works with 117 billion data points backed by a decade of data culture. Their E.L.Y. system for agronomists has delivered a 60% productivity improvement across 1,500+ frontline employees. The partnership model lets Bayer access cutting-edge capabilities without building everything in-house, while the internal data platform ensures institutional knowledge remains accessible.
AstraZeneca: AI in the Patient
AstraZeneca’s approach stands out because it reaches into clinical outcomes. Their QCS platform — a computational pathology system — quantifies biomarker targets across sub-cellular compartments and analyzes them in the context of spatial tissue organization.
In one pivotal trial, the technology explained why the drug appeared to miss in the overall population: it worked, but only in a specific subgroup that AI alone could reliably identify. AstraZeneca is now co-developing a companion diagnostic with Roche for regulatory approval as a first-in-class AI-driven diagnostic.
This is AI moving from back-office optimization to direct patient impact.
The Next Frontier: Agentic AI
Here’s where it gets interesting — and where most companies are still confused.
Most of what companies call “agents” today are actually assistants: they retrieve information, summarize documents, answer questions. Useful, but not transformative.
True agentic AI systems — ones with a high degree of autonomy that structure their own workflows, make decisions, and take actions — are the next step. Industry leaders expect 2026 to be the year we actually see them in production. Not because the technology isn’t ready, but because “you have to see it to believe it” — and most organizations haven’t seen it yet.
The shift from assistants to agents isn’t incremental. It requires rethinking processes end-to-end, building new governance frameworks, and — critically — developing organizational trust in systems that act autonomously. The companies that started building “Freedom in a Frame” governance models two years ago will have a structural advantage here.
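The assistant-versus-agent distinction can be made concrete with a minimal sketch. Everything here is a hypothetical illustration, not any vendor’s API: an assistant answers a single query, while an agent runs a loop — plan, act, observe the result, feed it back — until the goal is met.

```python
# Minimal sketch of an agentic loop: plan -> act -> observe -> repeat.
# All tool names and logic below are hypothetical stand-ins.

def search_literature(query: str) -> str:
    """Stand-in for a real retrieval tool an agent might call."""
    return f"3 papers found for '{query}'"

def draft_summary(findings: str) -> str:
    """Stand-in for a generation step acting on earlier observations."""
    return f"Summary based on: {findings}"

TOOLS = {"search": search_literature, "summarize": draft_summary}

def run_agent(goal: str, max_steps: int = 5) -> str:
    """An assistant answers one query; an agent chains its own steps."""
    state = goal
    plan = ["search", "summarize"]  # a real agent would generate and revise this plan itself
    for step in plan[:max_steps]:
        state = TOOLS[step](state)  # act, then feed the observation back in
    return state

print(run_agent("HER2-low biomarker evidence"))
```

The governance implication follows directly: because each loop iteration can trigger a real action, the boundaries (“the frame”) must be defined per tool, not per chatbot.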
The RAG Revolution as Foundation
What connects all of these initiatives is the maturation of the underlying technology — particularly Retrieval Augmented Generation.
The ability to give an AI model access to your entire corporate knowledge base — regulatory documents, clinical trial data, manufacturing records, competitive intelligence — and have it retrieve and synthesize relevant information accurately has gone from fragile prototype to production-grade infrastructure.
This matters disproportionately for pharma. The industry generates enormous volumes of highly structured, highly regulated data. The potential value of making that data accessible through natural language interfaces is immense. But the tolerance for errors is near zero. A hallucinated safety signal, a misquoted SmPC section, an incorrect regulatory reference — these aren’t embarrassing. They’re dangerous.
The technology had to be good enough for pharma-grade reliability. It is now.
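The mechanics described above can be sketched in a few lines. This is a toy illustration with made-up document names and naive keyword scoring — production RAG pipelines use vector embeddings and a grounded LLM — but the shape is the same: retrieve the most relevant internal passages, then generate an answer constrained to that retrieved context.

```python
# Minimal sketch of Retrieval Augmented Generation over an internal corpus.
# Documents and scoring are toy stand-ins for illustration only.

DOCUMENTS = {
    "smpc_section_4.8": "Adverse reactions include headache and nausea.",
    "trial_ctx_001": "Phase III trial met its primary endpoint in the subgroup.",
    "mfg_record_77": "Batch release criteria were met for all lots.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def answer(query: str) -> str:
    """Answer using only retrieved passages.

    Grounding the generation step in retrieved text (with citations
    back to the source document) is what keeps hallucinated safety
    signals and misquoted SmPC sections out of the output.
    """
    context = " ".join(retrieve(query))
    return f"Based on internal records: {context}"

print(answer("Which adverse reactions are listed?"))
```

The design choice that matters for pharma is in `answer`: the model is never asked to recall facts from its weights, only to synthesize what retrieval returned — which is what makes the error tolerance story tractable.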
The Missing Piece: People
The infrastructure is ready. The models are capable. The RAG pipelines are reliable. The investment is flowing. But none of it matters without the right people in the right roles, making the right decisions.
This means leaders who understand both the technology and the regulatory context. Who can distinguish between a genuine use case and an expensive science project. Who know when to build internally and when to partner. Who can drive cultural change in traditionally conservative organizations — and do it fast enough to matter.
It means individual contributors — Medical Advisors, regulatory specialists, data scientists, clinical operations professionals — who are willing to learn new tools, challenge existing processes, and bridge the gap between domain expertise and AI capability.
And it means organizations that can reconfigure in weeks, not quarters. That treat recalibration as a feature, not a failure. That understand the cost of non-decision.
The technology is no longer the bottleneck. The question is whether your organization has the speed, the governance model, and the talent to actually use it.
The window is now. Are you ready?