
The AI Layoff Trap — What It Means for Medical Affairs

A new economics paper argues that firms are trapped in an AI automation arms race. Here's why Medical Advisors and MSLs who adapt now will lead the teams of the future.

The Race No One Can Stop

In April 2026, economists Brett Hemenway Falk and Gerry Tsoukalas published a paper on arXiv that should be required reading for anyone in pharma who still thinks AI adoption is optional. “The AI Layoff Trap” proves mathematically what many of us feel intuitively: companies are locked in an automation arms race — and knowing it’s destructive doesn’t stop them.

The core finding is brutal in its simplicity. When one firm automates, it captures the full cost saving but destroys only a fraction of the consumer demand it depends on. The rest of the damage falls on competitors. Every firm sees this. Every firm automates anyway.

The authors frame this as a Prisoner’s Dilemma — one of the most famous concepts in game theory. Imagine two firms that could both agree to automate cautiously. They’d both be better off. But each firm has a dominant strategy: automate aggressively regardless of what the other does. If your competitor automates and you don’t, you lose on costs. If your competitor holds back and you automate, you gain market share. So both firms automate — and both end up worse off than if they’d shown restraint together. The individually rational move for each player produces a collectively worse outcome. That’s the trap.
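For readers who like to see the arithmetic, the dilemma can be sketched as a toy payoff matrix. The numbers below are invented for illustration, not taken from the paper:

```python
# Toy payoff matrix for the automation dilemma. Numbers are hypothetical;
# higher is better. Each payoff is (Firm A, Firm B).
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: best joint outcome
    ("restrain", "automate"): (1, 4),  # A loses on costs, B grabs share
    ("automate", "restrain"): (4, 1),
    ("automate", "automate"): (2, 2),  # the trap: worse for both than restraint
}

def best_response(rival_move):
    """Firm A's profit-maximizing move, given what the rival does."""
    return max(("restrain", "automate"),
               key=lambda move: payoffs[(move, rival_move)][0])

# Automating pays off no matter what the rival does: a dominant strategy.
print(best_response("restrain"))  # -> automate (4 beats 3)
print(best_response("automate"))  # -> automate (2 beats 1)
```

Both firms run this same calculation, both land on "automate", and both end up at (2, 2) instead of the (3, 3) they could have had together.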

This isn’t a tech industry problem. It’s an economics problem. And it’s coming for Pharma as well.

What the Paper Actually Says

The model strips the AI debate down to one mechanism: when companies replace workers with AI, those displaced workers stop spending. Each round of layoffs erodes the purchasing power all firms depend on. At the limit, firms automate their way to boundless productivity and zero demand.

The authors tested six policy instruments against this externality. Universal basic income doesn’t fix it — it raises the floor but leaves the automation incentive untouched. Capital income taxes don’t fix it — they scale profits down uniformly without changing the per-task math. Worker equity participation narrows the gap but can’t close it. Coasian bargaining fails because automating is a dominant strategy — no voluntary agreement between firms is self-enforcing, just as the Prisoner’s Dilemma logic predicts.

Only one instrument actually works: a Pigouvian automation tax. The idea is simple — if your automation destroys jobs and those lost salaries hurt every other company’s revenue, you should pay for that damage. Think of it like a carbon tax, but for job displacement. And here’s the elegant part: the tax revenue funds retraining programs that help displaced workers land new roles. As more workers get reabsorbed, the damage shrinks — and so does the tax. The policy fixes itself over time.
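The mechanism fits in a few lines of arithmetic. All numbers below are invented for illustration; they are not the paper's calibration:

```python
# Hypothetical toy model of the demand externality. A firm that automates
# one role keeps the full cost saving but feels only its own share of the
# demand those lost wages supported; competitors absorb the rest.
COST_SAVING = 5.0   # saved by the firm per automated role
DEMAND_LOSS = 6.0   # total consumer spending destroyed per role
OWN_SHARE = 0.2     # fraction of that lost demand hitting this firm

def private_gain(tax=0.0):
    """Profit change the firm perceives from automating one role."""
    return COST_SAVING - DEMAND_LOSS * OWN_SHARE - tax

social_gain = COST_SAVING - DEMAND_LOSS       # -1.0: socially harmful
externality = DEMAND_LOSS * (1 - OWN_SHARE)   #  4.8: damage dumped on rivals

print(private_gain())             # 3.8 -> the firm automates anyway
print(private_gain(externality))  # -1.0 -> the tax flips the decision
```

With the tax set equal to the harm done to others, the firm's private calculus matches the social one (both come out at -1.0 here), which is exactly what a Pigouvian tax is designed to do.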

The most counterintuitive finding: better AI makes the problem worse, not better. Each firm perceives a market-share gain from automating beyond its rivals, but at equilibrium these gains cancel out, leaving only a larger distortion. The authors call this the Red Queen effect — you have to run faster just to stay in place.

Now Translate This to Medical Affairs

If you’re a Medical Advisor or MSL reading this, the instinct might be: “This is macroeconomics. What does it have to do with me?”

Everything.

Pharma is a fragmented market. Dozens of companies compete in the same therapeutic areas with similar products, similar congress strategies, and similar KOL engagement models. That’s exactly the market structure where the automation trap bites hardest — the paper shows that more competition widens the gap between what’s individually rational and what’s collectively optimal.

AI has developed so fast in the last three years that it is already performing at the level of a top Medical Affairs interim. That’s not a prediction — that’s the current state. It can synthesize clinical data, draft medical responses, analyze congress abstracts, track KOL publication patterns, and generate slide decks. Today. But it’s still reactive.

Can you imagine where it will be in three years? On raw knowledge, AI may already rival the best-read medical advisor in the world — and with gains in agentic development, it will become proactive: tireless, up to date on every publication, fluent in every therapeutic area simultaneously, and no longer waiting for your prompt.

So the question becomes uncomfortable but necessary: what’s left for you as a current Medical Advisor or MSL?

The Answer Isn’t Resistance — It’s Elevation

The paper’s own framework points to the answer. The automation externality disappears when displaced workers are reabsorbed into higher-value roles. When people move up rather than out — into positions that are more strategic, more human, more valuable — the trap dissolves. That’s the exit.

And here’s what makes Medical Affairs different from customer support or back-office operations: our work is fundamentally built on human relationships.

An AI can synthesize every publication on a biomarker in minutes. It cannot sit across from a KOL at dinner and hear the hesitation in their voice when they talk about a new treatment approach. It cannot read the room at an advisory board and sense which unspoken concern is blocking adoption. It cannot build the kind of trust with an oncologist that takes three congresses, two honest disagreements, and one late-night conversation about a difficult patient case.

Medical Affairs lives and dies by these relationships. The MSL who has spent years earning a clinician’s trust — who gets the call when a tricky case comes up, who gets invited to the pre-congress meeting — that person is irreplaceable. Not because of what they know, but because of who they are to the people they work with.

AI will handle the data, the literature, the reporting, the slide decks. That’s already happening. But the human layer — understanding a physician’s real-world frustrations, navigating the politics of a hospital formulary committee, recognizing when a medical education gap is actually a trust gap — that requires presence, empathy, and years of built relationships. No model can short-circuit the years it takes to become the person a top oncologist actually wants to hear from.

You will become the orchestrator of an army of AI agents. And these agents won’t sit idle waiting for your prompts. They will proactively update you with the newest publications, congress abstracts, and competitive intelligence — pre-summarized, pre-filtered, and ready to share with the specific KOLs who care about that exact topic.

Imagine starting your Monday morning with a briefing that was built overnight by a swarm of specialized agents. One agent monitored every major journal for new data in your indication. Another scanned the weekend’s conference sessions and flagged three abstracts that contradict a KOL’s published position. A third drafted a tailored email for each of your top ten physicians — different framing, different data, different tone — based on what you last discussed with them. A fourth prepared the slides you’ll need for Wednesday’s advisory board, already formatted in your company’s template.

Your job is no longer to produce this work. Your job is to review, judge, and act. Which insight is worth a phone call? Which KOL needs a face-to-face visit instead of an email? Which piece of data changes your therapeutic strategy — and which is noise? Those decisions require clinical judgment, political awareness, and personal knowledge of the people involved. Those decisions are still yours.

This is what makes the role more human, not less. When AI removes the administrative ballast, what’s left is the part of Medical Affairs that always mattered most: sitting with a physician, understanding their world, and helping them make better decisions for their patients.

The path forward for Medical Advisors and MSLs isn’t competing with AI on information tasks. It’s doubling down on being the trusted human partner in a world that’s about to be flooded with AI-generated content. The physicians who matter most will crave the opposite: someone who listens, who understands context, who shows up.

If you start early and build AI into every workflow, it doesn’t replace you. It frees you to spend more time on what actually moves the needle — relationships, strategy, and judgment. It elevates you into a Medical Affairs lead — someone who coordinates many AI agents while remaining the human center of gravity that holds it all together.

The Practical Playbook

In the paper’s framework, retraining is what ultimately dissolves the externality: tax revenue funds upskilling, displaced workers move into higher-value roles, and the damage shrinks. Here’s what that looks like in practice:

  1. Audit your weekly tasks. Which ones could an AI agent do today? Be honest. Data extraction, literature monitoring, slide preparation, event reporting — these are already automatable.
  2. Learn to orchestrate, not execute. The future Medical Advisor doesn’t write the congress summary. They design the workflow that generates it, review the output, and make the strategic call about what it means for the medical plan.
  3. Build AI fluency now. Not coding — workflow design. Understand how to prompt, how to chain tasks, how to validate AI outputs against medical standards. This is the new core competency.
  4. Invest in what AI can’t replicate. Deep therapeutic expertise. Trusted HCP relationships. Cross-functional influence. The ability to sit in a room with a clinician and have the conversation that changes a treatment paradigm.
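As a concrete sketch of point 2, orchestration means chaining the automatable steps and reserving the final call for yourself. Every function name and data point below is an invented placeholder, not a real agent framework:

```python
# Hypothetical orchestration sketch: automatable steps are chained,
# the judgment step stays human. All names and data are placeholders.
def monitor_literature(indication):
    # Stand-in for an agent scanning journals for new publications.
    return [{"title": "Phase 3 OS update", "relevant": True},
            {"title": "Unrelated case report", "relevant": False}]

def summarize(papers):
    # Stand-in for an agent that pre-filters and condenses findings.
    return [p["title"] for p in papers if p["relevant"]]

def human_review(briefing):
    # The non-automatable step: deciding what the findings mean
    # and which KOL conversation they should trigger.
    return {"briefing": briefing, "decision": "discuss with KOL in person"}

plan = human_review(summarize(monitor_literature("your indication")))
print(plan["decision"])
```

The point of the sketch: the advisor's name is on the last function, not the first two.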

The Bigger Picture

The AI Layoff Trap paper shows that market forces alone won’t solve this — firms will automate beyond what’s collectively optimal because the incentive structure demands it. Pharma is no exception. The companies that are cutting Medical Affairs headcount today are not being reckless. They’re being rational. That’s what makes it a trap.

But traps have exits for individuals, even when markets are stuck. The Medical Advisors and MSLs who treat AI as a force multiplier rather than a threat will become the leaders who coordinate the next generation of medical affairs operations — part strategist, part AI orchestrator, part trusted medical partner.

The ones who wait will find themselves in exactly the position the paper describes: displaced not because they weren’t good enough, but because the competitive math left no other option.


This article was co-authored with Anthropic’s Claude Opus 4.6 model. The ideas, domain expertise, and editorial direction are mine — the AI helped structure, draft, and refine the text.

Dr. Artur Kokornaczyk

Medical Affairs Lead in Oncology with 10+ years of experience. Passionate about AI, digital strategy, and building systems that amplify the impact of medical science.