
AI Brain Rot: Why Every Healthcare AI Announcement Sounds the Same
I've contributed to a growing problem that now subtly drains my mental energy—brain rot, maybe? Every day, almost every hour, I get announcements or emails about AI companies launching revolutionary solutions that promise to reduce burnout or boost productivity and patient engagement.
AI is everywhere now. I've been writing about it nonstop. But somehow, it feels like nothing has really changed—not relative to the sheer number of new companies and AI services flooding the market.
Every AI announcement sounds the same, and my reaction has become reflexive: “That's great. Next.”
In this article, I'm doing my own self-therapy to understand why I feel this way.
What’s Annoying Me
Because I write this newsletter, I receive emails every 30 minutes—I'm not exaggerating—from communications companies alerting me to a new AI company launching a revolutionary product. Always asking if I'd like to speak with the CEO. Here's an example from this past week:
Hi Jared. Today, [Company A] launched its next generation of clinical AI agents—bringing patients the speech and personalization they expect from human care. […] Let me know if you'd be interested in speaking with [the CEO]. I'd be happy to set up a conversation.
Currently, there are 100+ AI scribes (I found this after a quick search on Elion) that are “game changing.” The market is flooded, and all of these companies do roughly the same thing. It's now a commodity. If you're in the documentation space and not doing AI scribes, then what are you doing?
Then we have the daily health system AI partnership announcements. These are exciting, given that health systems are typically slow to change (I talk more about this in my AI and Healthcare Utilization course). The AI hype is pushing things forward. But how far forward? Kaiser's 2.5-million-encounter study showed that ambient AI saved only 0.7 minutes per note for heavy users. So maybe not that far? This perfectly captures the “promise vs. reality” tension of AI solutions.
Lastly, the burnout claim is catchy. Every company says it reduces burnout by solving a single contributing factor. It makes me roll my eyes every time I see the announcement. Personally, I've been using AI tools since day one. My cognitive load has lightened, but burnout-wise—I feel the same… just ask my wife (who is a therapist). I still work the same hours. Maybe I have more time to crack jokes with my colleagues. My jokes have improved.
Why the gap between promise and reality?
There's an implementation gap: tools get announced, but they struggle to integrate into workflows and scale. I see the press releases with their sleek demos. But then the product sits in pilot mode for months—or gets rolled out to a handful of early adopters who don't represent the messy reality of most clinical settings.
Companies want proof their solution works—and works better than competitors’. (I wrote about this back in 2023 in Are Digital Health Solutions Actually Solutions—same issue, different technology.) But measuring true impact is hard.
Take time saved on a single task. Sure, an AI scribe might save 0.7 minutes per note; across, say, 20 notes a day, that's roughly 14 minutes. But what about the cognitive load of reviewing that note for errors? What about time spent training staff, troubleshooting integration issues, or managing edge cases where the AI gets it wrong?
Then there's the patient question. What do patients get out of this? Most AI tools make our workflows more efficient. But if that efficiency doesn't translate into better care, more face time, or improved outcomes—what are we really optimizing for?
And that brings me to the race. Raise, raise, raise. Demo prototypes to hospitals and physicians, get the headline, close the funding round. Tangible clinical utility still feels like an afterthought.
There's plenty of attention on problems that attract investors and headlines—ambient documentation, clinical decision support, predictive analytics. There's far less focus on the boring workflow problems that, compounded, cause the burnout and inefficiencies we actually experience day to day: the overflowing inbasket, redundant data entry, prior auth forms, reviewing faxes … the list goes on. While these aren't sexy problems, they're the ones that drain us.
Maybe we just can't measure it
Do we have a measurement problem? For example, how do you measure cognitive load? There's no RVU for “I didn't have to context-switch five times during this encounter.” No billing code for “I actually had time to think.”
Physicians who critique cardiology RCTs for not using hard outcomes like mortality as the primary endpoint are equally quick to critique AI solutions that lack hard outcomes. We want evidence and data. But the outcomes that matter here are hard to measure in ways that satisfy our clinical standards. Either the important measures—like burnout—can't be operationally defined in a robust way, or the time to see results is too long for a startup burning through cash (it needs results NOW).
Traditional metrics—RVUs, throughput, length of stay—don’t capture what matters: decision quality, cognitive burden, time to think (the stuff I keep writing about). These are the metrics that matter to physicians. But physicians aren’t the ones defining success for these products—administrators and investors are.
So if we can’t measure what matters, and the wrong people are defining success, what should we actually be paying attention to?
Dashevsky’s Dissection: So What?
AI is everywhere now, but somehow it feels like nothing has really changed from a physician's perspective. Much of the decision-making around onboarding AI tools systemwide is out of our control, but we at least have a voice in what we want. We should focus on tools that make our days feel lighter—hard to define objectively, as I mentioned above, but we know the feeling:
Finishing your notes before you leave the hospital instead of at 10 PM.
Not dreading your inbasket when you log into Epic Monday morning.
Having the mental space to actually listen to a patient instead of thinking about the fifteen other things you need to document.
When possible, we should hold vendors to higher standards. This includes demanding evidence of sustained impact—not just a pilot study with ten hand-picked users. I know a couple of AI companies that are partnering with health systems to run real studies of their platforms and present the results at conferences. That's the bar.
Right now, I have a set of AI tools I rely on daily, and I'm extremely comfortable using them. I can see myself being stubborn about trying new tools—why switch when I already have a workflow that works? We're creatures of habit, especially when it comes to our clinical routines. Maybe that's why the race to be first matters so much in this space. For now, I'm staying cautiously optimistic—but holding out for tools that actually solve the boring problems. The flashy ones can wait.