
Why Most AI Tools Miss the Real Workflow
A few months ago, I saw a LinkedIn post by Dr. Paulius Mui sharing a decade-old paper that mapped the complete workflow of primary care physicians during patient visits.
The researchers found 191 distinct tasks in a routine clinical encounter.
The researchers observed 30 primary care physicians across 17 clinics in Wisconsin and Iowa, mixing urban and rural settings, academic and community practices, and both EHR and paper-based systems. They watched real patient visits, recorded every task physicians performed, and systematically coded the data. The result was a comprehensive task list with 12 major categories, 189 subtasks, and 191 total distinct tasks. Tasks ranged from the obvious (gathering the chief complaint, performing the physical exam) to the granular (logging into the EHR, reviewing scratch-paper notes, answering pages mid-visit).
191 tasks. Something that seems so simple from a distance is actually incredibly complex.
I call this the bird's-eye vs. mouse-eye problem in workflow analysis.
The bird's-eye view is high-level—100 feet above the workflow. It's the big picture: patient arrives, vitals are taken, physician enters, history is gathered, exam is performed, plan is discussed, patient leaves.
The mouse-eye view is on-the-ground. It sees every nook and cranny: patient checks in, hands insurance card to front desk, front desk confirms insurance, patient waits, medical assistant places blood pressure cuff, records vitals on paper, logs into EHR, updates chart.
If you imagine a hawk hunting a mouse, the hawk watches the mouse navigate through the grass. But the mouse is slipping over gravel, gnawing through tall grass, dropping into divots—details the hawk can't see.
That's the problem with workflow analysis. Most people building tools for physicians are operating from the bird's-eye view.
This paper was written when EHRs were being rolled out under the ACA's meaningful use requirements. The authors wanted to give clinics a tool to map their workflows before implementing major changes—so they could plan intelligently rather than react to chaos.
Now, I'd assume 99% of physicians use an EHR, and hundreds—if not thousands—of AI companies are trying to figure out how to integrate their solutions into physicians’ workflows.
And most of them are doing it blind.
Unless you're working in a healthcare setting with direct access to physicians, getting the mouse-eye perspective is difficult. You can interview physicians about their workflows, but interviews miss nuance. You can shadow them. That's the gold standard—because you see the details that don't make it into a verbal description.
Take one example: in many exam rooms, the computer faces the wall, and the patient sits in the exam chair behind the physician. So during history-taking, the physician is constantly turning back and forth between patient and computer.
Sometimes they're even talking to the computer instead of the patient.
It's grossly inefficient and terrible for bedside manner. But unless you're in the room, you wouldn't know to design around it.
Before medical school, I worked on an automated waitlist solution for outpatient doctors. Fellow Huddler Jonny Blum and I would go to physician practices in the Bethlehem area and literally just watch how the front desk operated. We saw how staff moved their mouse between pages, where they got stuck, what workarounds they used. From that, we built detailed process maps—both high-level and granular. If you want to see the process maps we built to get a feel for what we did, just let me know and I’ll send you the link.
The benefit of these granular workflows is that you can iterate on them, time them, find the bottlenecks and pain points, and identify the actual problem your solution should address, not the problem you assume exists from 100 feet up.
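The timing-and-bottleneck step is simple enough to sketch in code. Here's a minimal illustration of the idea: record how long each granular task takes across several observations, then rank tasks by average duration to surface bottlenecks. The task names and numbers below are made up for illustration; they are not from the study or from our process maps.

```python
from statistics import mean

# Hypothetical timings (in seconds) gathered while shadowing a front desk.
# Each task maps to the durations recorded across several observed visits.
observations = {
    "confirm insurance": [95, 110, 140, 88],
    "update chart in EHR": [60, 75, 55],
    "record vitals on paper": [30, 28, 35],
    "log into EHR": [20, 45, 18, 22],
}

# Average duration per task, ranked worst-first to surface bottlenecks.
averages = {task: mean(times) for task, times in observations.items()}
for task, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{task}: {avg:.1f}s average over {len(observations[task])} observations")
```

Trivial as it looks, this is the payoff of mouse-eye observation: you can only rank tasks you actually saw and timed, and the worst offender is rarely the one the bird's-eye view would predict.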
Why This Matters Now
We're in the middle of an AI implementation wave in healthcare. Every health system is being pitched ambient documentation tools, clinical decision support, automated coding, AI scribes, diagnostic assistants.
Most of these tools are built by people who have never watched a physician work. They're designed from the bird's-eye view. They assume workflows are linear—that physicians have time to review AI-generated summaries, that the EHR interface is intuitive, that the physician is sitting at a desk rather than standing in a hallway with a phone in one hand and a faxed patient chart in the other.
If you're evaluating AI tools for your practice or health system, here's what you should demand:
Ask vendors if they've mapped your actual workflow. Have they shadowed your physicians? Have they timed tasks? Do they know where the bottlenecks are?
Ask how their tool integrates at the task level. Which of those 191 tasks does it replace? Which does it add? Where does it create new handoffs or require new clicks?
Ask what happens when the tool breaks the workflow. What's the fallback? How much training is required? How long does it take a physician to override the AI when it's wrong?
If a vendor can't answer these questions, they're operating from the hawk's perspective. And you're the mouse on the ground, about to get more work dumped on you.
In summary, workflows are not simple. They never were. And anyone building tools for physicians without doing the ground-level work to understand them is setting you—and themselves—up for failure.
