I design AI products that work in complex, high-stakes environments.
At Eleos Health, I design AI systems for behavioral healthcare. My work spans documentation, compliance, and revenue, but the core challenge remains the same: ensuring the system is understandable, trustworthy, and usable in practice.
Live Quality Assist
The central design problem wasn't the UI. It was timing. Compliance review happened months after submission, when nothing could be fixed. I designed LQA to move that check into the moment of writing: nudges over blocks, factor-level reasoning over scores.
Catches documentation issues at the moment of writing, not weeks after submission
Embedded Audio
Invisible when it works, impossible to miss when it doesn't. I designed the recording flow and five distinct error states, each requiring a different response, built for a context where a silent failure means finishing a 60-minute session with nothing to show for it.
Became one of the most-used features in the company at launch
Coding Back Office
Providers undercode not from carelessness, but because documentation complexity makes the right code genuinely hard to know. The design challenge: surface AI-suggested codes with enough reasoning to be trusted, and enough transparency to survive an audit.
Surfaces coding gaps hidden in existing billing data
Spent
A side project: an AI that connects your spending to context (your calendar, your habits, what you've told it). Built to explore four interaction patterns around how AI earns trust, with a working prototype and observations that map directly to the enterprise problems I work on.
4 AI interaction patterns documented with enterprise parallels
Design Approach
Models can be remarkably capable. The failure is almost always in the interface: how output gets presented, how people verify it, and what they're expected to do with it.
The products I work on are only useful if people actually rely on them. That means the real design problem is rarely the AI output itself; it's the feedback loops, confidence cues, and review flows that determine whether someone acts on what the model says.
What I keep coming back to isn't the model. It's the moment a person decides whether to act on what it says, and what the design did, or didn't do, to get them there.
Get in touch
If you're building in a domain where design decisions carry real weight, I'd like to hear about it. I do the research, the Figma, and the code.
carlyraizon@gmail.com