I’m getting more excited about AI copilots and agents—and less sold on ambient documentation.
Copilots tackle the nagging chores that really steal our time, while ambient scribes keep promising big note relief but barely move the needle.
In this article, I’ll give you a quick intro to ambient AI, walk through three new-ish studies on how it’s faring, and share why I’m betting on those “hardworking” agents instead.
Ambient AI promises fewer keyboard strokes, less “pajama charting,” and more face time with patients. It sounds great.
Here’s how it works: Ambient AI slips a mic into the exam room, streams the doctor-patient conversation to a secure cloud, and transcribes the audio into text in real time. A large language model then assembles a tidy SOAP draft that lands back in the EHR seconds later (a key caveat: only when the software is integrated with the EHR). The physician skims, tweaks, and signs, with no typing from scratch, and every edit makes the system smarter for next time.
[Image: my poorly designed sketch]
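If you want the moving parts spelled out, here’s a minimal sketch of that pipeline in Python. To be clear, everything in it is hypothetical: the function names, the prompt, and the stubbed transcription and LLM calls are mine for illustration, not any vendor’s actual code or API.

```python
# A toy sketch of the ambient-scribe pipeline described above.
# Both service calls are stubs; a real product would wire in its own
# speech-to-text and LLM services here.

SOAP_PROMPT = (
    "You are a clinical scribe. From the visit transcript below, draft a "
    "SOAP note (Subjective, Objective, Assessment, Plan). Flag anything "
    "you are unsure about for physician review.\n\nTranscript:\n{transcript}"
)

def speech_to_text(audio_chunk: bytes) -> str:
    """Stub: a real system streams exam-room audio to a transcription service."""
    return "<transcribed speech>"

def call_llm(prompt: str) -> str:
    """Stub: a real system sends the prompt to a large language model."""
    return "<draft SOAP note>"

def draft_soap_note(audio_chunks: list[bytes]) -> str:
    # 1. Mic audio becomes text, chunk by chunk, as the visit happens.
    transcript = " ".join(speech_to_text(chunk) for chunk in audio_chunks)
    # 2. The LLM turns the transcript into a tidy SOAP draft.
    # 3. The draft lands in the EHR; the physician skims, tweaks, and signs.
    return call_llm(SOAP_PROMPT.format(transcript=transcript))
```

That key caveat from above lives outside this sketch: getting the draft back into the EHR is the integration work that makes or breaks these products.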
The next question, of course, is: does this actually save physicians time? Keep reading.
Three recent studies offer the best look at the latest data on ambient AI and its true benefits (or lack thereof). I’ve summarized them below:
In a five-week pre/post pilot at UPenn, 46 clinicians across 17 specialties used an EHR-integrated Nuance DAX ambient scribe and shaved documentation time from 10.3 to 8.2 minutes per visit (-20.4%), bumped same-day encounter closure from 66% to 72%, and cut nightly “pajama-time” by 15 minutes (-30%). Cognitive load scores (NASA-TLX) dropped by about six points, and burnout prevalence nudged down from 42% to 35% (not statistically significant), while qualitative feedback praised improved face-time yet flagged the editing and proofreading burden.
In a pre/post pilot at Sutter Health, 100 ambulatory clinicians (58% primary care) received access to an Abridge ambient-AI scribe integrated with Epic. Three months after go-live, mean time in notes fell from 6.2 to 5.3 minutes per visit (-0.9 min). Survey data on “pajama-time” found the share spending ≤1 hour a week on notes jumped from 14% to 54% (P < .001), though objective data on after-hours EHR time showed no statistically significant change. Cognitive load scores (NASA-TLX) roughly halved, falling to around six points, echoing the drop in the Penn study above. More than 90% said the tool let them give patients undivided attention, and 72% felt it boosted overall job satisfaction.
I wrote about this study several weeks ago (read it here), but in brief, Kaiser Permanente published ambient AI scribe outcomes across more than 2.5 million patient encounters. The highest-volume users—those with the most documentation burden going in—saved an average of 0.7 minutes (42 seconds) per note. Low users saved only 0.15 minutes (9 seconds). Across 2.5 million encounters, those small savings added up to 15,700 hours of documentation time saved—roughly 1,794 full workdays.
The big takeaway: ambient AI scribes do trim documentation time, but the per-encounter savings are measured in seconds to a minute or two, not the dramatic “hours back” that many clinicians (including me) hope for.
Ambient scribes are helpful—the numbers prove it—but they’re nowhere near that magic fix. We’re shaving off a minute or so from a six-minute note while the real tasks eating our time—pre-charting, writing patient letters, chasing lab results, and wrestling with prior auth—keep piling up. It’s that extra admin, not the note itself, that drains me (and you, too).
Honestly, I don’t hate writing notes. They help me think. What wears me out is filling out DME forms, drafting prior-auth letters, and explaining “normal” results in inbox messages. That’s the work that spills into my evenings.
This is the exact area where AI can move the needle. Copilots that zero in on those repetitive, non-clinical tasks could reclaim far more headspace than an ambient scribe ever will. These single-purpose agents are already startlingly good (see the prior-auth sketch after this list):
Pre-charting: “Summarize last week’s hospitalization,” and the copilot spits back a clean, problem-oriented snapshot—no ten-minute dig through a patient’s chart.
Billing: Let an agent sweep your note for every billable diagnosis and justify the level of service without you hunting for codes.
Prior auth: One prompt and the copilot drafts a letter that pulls in the patient’s history and the latest evidence, ready to send off to the insurer.
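To make that last one concrete, here’s a toy sketch of what such an agent boils down to. Everything in it is an assumption on my part: the function names, the prompt, and the stubbed EHR and LLM calls are invented for illustration, not pulled from any shipping copilot.

```python
# Toy sketch of a prior-auth copilot. Both service calls are stubs;
# a real agent would query the EHR and call a production LLM.

PRIOR_AUTH_PROMPT = (
    "Draft a prior-authorization letter for {drug} for this patient. "
    "Cite the relevant history and current guideline evidence, and leave "
    "[BRACKETED] placeholders wherever you are unsure.\n\n"
    "Chart summary:\n{chart}"
)

def fetch_chart_summary(patient_id: str) -> str:
    """Stub: a real copilot would pull problems, meds, and labs from the EHR."""
    return "<relevant history from the chart>"

def call_llm(prompt: str) -> str:
    """Stub: a real copilot would send the prompt to a large language model."""
    return "<draft letter for physician review>"

def draft_prior_auth(patient_id: str, drug: str) -> str:
    chart = fetch_chart_summary(patient_id)
    return call_llm(PRIOR_AUTH_PROMPT.format(drug=drug, chart=chart))
```

One prompt in, one reviewable draft out. The pre-charting and billing examples are the same shape with a different prompt, which is why these single-purpose agents scale so well.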
Those are just a few examples of what a copilot could do, but the potential impact is massive. We should aim AI at every task that isn’t clinically meaningful. And, perhaps it’s a hot take, but I do think writing notes is clinically meaningful if it helps you formulate your differentials and A&P.
COMMUNITY POLL
I’m trying to build a tighter-knit community:
Fill out this form if you’re interested in small, niche, in-person meetups (Huddle+ members would get priority).
Let me know, too 👇️
What is your preferred platform for community building?
HUDDLE UNIVERSITY
I’m working on a new course—How Healthcare Really Works—launching July 1.
If you want to actually understand how hospitals get paid, why drug prices are so high, and how the system really functions, you can pre-enroll now and get $50 off.