Bridging the Gap Between Clinical Needs and Technical Reality
I’ve spent enough time in both worlds to see the same frustration play out repeatedly: a healthcare company pitches an idea, developers nod along, and six months later everything’s misaligned. The software solves the wrong problem, or solves the right problem in a way that doctors would never actually use.
The gap isn’t really about technology or medicine. It’s about translation.
Why the Communication Breaks Down
Doctors think in patterns and workflows. We notice a problem in practice, see it happening repeatedly, and think “there’s got to be a better way.” Then we go find developers and describe the problem the way we think about it.
Developers, meanwhile, are already problem-solving. They hear “we need better EHR integration” and immediately start designing systems. They ask clarifying questions—good ones—but they’re asking from a technical frame. What does “integration” actually mean? What’s the data model? Where does the source of truth live?
These are the right questions. But a clinician who hasn’t thought about data models hears them as obstacles, and the instinctive reply is “can’t you just connect the systems?” From the clinical side, it feels like the developer is overcomplicating something simple.
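To make that concrete, here’s a minimal sketch in TypeScript of one decision hiding inside “just connect the systems.” Everything in it is hypothetical: the types and the reconcileAllergies policy are invented for illustration, not drawn from any real EHR schema.

```typescript
// Hypothetical sketch: two systems can both store "the allergy list"
// and disagree. Connecting them forces a choice about who wins.

interface AllergyRecord {
  substance: string;       // e.g., "penicillin"
  recordedAt: Date;        // when this system last touched the entry
  source: "ehr" | "app";   // which system asserted it
}

// The developer's real question: when the records conflict,
// which system is the source of truth?
function reconcileAllergies(
  ehrRecords: AllergyRecord[],
  appRecords: AllergyRecord[],
): AllergyRecord[] {
  const merged = new Map<string, AllergyRecord>();
  for (const rec of [...ehrRecords, ...appRecords]) {
    const existing = merged.get(rec.substance);
    // Policy choice, not a technical detail: here, the most recently
    // recorded entry wins. "Always trust the EHR" is a different policy
    // with different clinical consequences.
    if (!existing || rec.recordedAt > existing.recordedAt) {
      merged.set(rec.substance, rec);
    }
  }
  return [...merged.values()];
}
```

The point isn’t the code. It’s that “connecting the systems” forces decisions, like which record wins a conflict, that nobody in the room has articulated yet.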
Neither side is wrong. They’re just speaking different languages.
Red Flags I’ve Learned to Notice
Over time, I’ve started recognizing patterns that usually mean a project is going to struggle:
“We don’t have a problem, we just need an app.” This is the biggest one. If a company can’t articulate the actual workflow problem they’re solving, they’re building features, not products. I’ve seen this with startups especially—they’re excited about the technology, not the problem. It usually ends badly.
“All our users will definitely use this.” If the team is sure without testing, that’s a warning sign. Clinical workflows are stubborn. People have ways of doing things that make sense given their constraints. A solution that ignores those constraints won’t get adopted, no matter how elegant it is.
Misalignment on what “done” looks like. One team thinks done means “software ships.” Another thinks it means “doctors can actually use it in their workflow without breaking their day.” These need to match before you start building.
Regulatory ambiguity treated as “not my problem.” If a company is building something that might be a medical device and they’re not willing to think through that upfront, you’re going to have problems. This isn’t scaremongering; it’s just reality. Better to figure it out early.
Questions I Ask Now (Before Saying Yes)
When someone approaches me about advising on healthcare software, I’m now more deliberate about what I’m actually signing up for. These are the questions that matter:
“What’s the actual problem you’re solving? Tell me a specific story.” Not the elevator pitch. A real scenario. A doctor at 2 PM on a Tuesday saying “this part of my day sucks because…” That’s where the real problem lives.
“Have you talked to actual users? What did they say?” Not focus groups. Not surveys. Actual conversations with people doing the work. If the answer is no, that’s fine, but I need to know that’s the stage we’re at.
“What happens if nobody uses it?” This shows whether the team is grounded. If they haven’t thought about that, they’re in fantasy mode.
“Are you willing to change your idea based on what you learn?” Some teams ask for advice and then don’t take it because they’re already committed emotionally to the solution. That’s not advisory—that’s theater.
“Do you understand the regulatory landscape here?” Whether you’re building a medical device or not, there are rules that matter. If the team hasn’t thought about it, I want to know. If they have and disagree with me on the classification, that’s a conversation. But they need to have considered it.
Why I’m Actually Here
Here’s the honest part: I started advising on healthcare software because I got tired of watching good ideas fail when the teams building them had no one who could translate between the clinical world and the technical world.
Most companies don’t need another voice in the room. They need someone who understands that an “inefficient workflow” in medicine sometimes exists for reasons—liability, institutional constraints, learned approaches that work in practice even if they look illogical from outside.
They also need someone who understands that developers aren’t being difficult when they ask about edge cases and error handling. They’re being responsible.
The bridge isn’t magic. It’s mostly listening hard to both sides and sometimes saying “I think you’re both right, and here’s what needs to happen for that to actually work.”
That’s the work I’m interested in.