Carly Raizon

I design AI products that work in complex, high-stakes environments.

At Eleos Health, I design AI systems for behavioral healthcare. My work spans documentation, compliance, and revenue, but the core challenge remains the same: ensuring the system is understandable, trustworthy, and usable in practice.

[Product screenshot: app.eleos.health clinical note editor (Overview, Progress Note, Treatment Plan) with a Note Quality panel showing 7 checks complete (Actionable Plan, Golden Thread, Medical Necessity, Risk Assessment) and a Diagnosis Required warning: an ICD-10 code must be linked to justify medical necessity.]
AI · Clinical Chrome Extension Eleos Health 2024–2026

Live Quality Assist

The central design problem wasn't the UI. It was timing. Compliance review happened months after submission, when nothing could be fixed. I designed LQA to move that check into the moment of writing: nudges over blocks, factor-level reasoning over scores.

Catches documentation issues at the moment of writing, not weeks after submission

[Product screenshot: an EHR session note with the embedded Eleos recorder. The recording timer is running, an AI draft is generating from the session transcript, and the diagnosis F33.1 (MDD, Recurrent) is linked, with an End Session control.]
Audio · AI Infrastructure 2024

Embedded Audio

Invisible when it works, impossible to miss when it doesn't. I designed the recording flow and five distinct error states, each requiring a different response, for a context where a silent failure means finishing a 60-minute session with nothing to show for it.

Became one of the most-used features in the company at launch

Internal Tools · Billing · AI · Eleos Health · 2024–2026

Coding Back Office

Providers undercode not from carelessness, but because documentation complexity makes the right code genuinely hard to know. The design challenge: surface AI-suggested codes with enough reasoning to be trusted, and enough transparency to survive an audit.

Surfaces coding gaps hidden in existing billing data

[Product screenshot: Spent, April 2026. A narrative insight ("You ordered delivery 11x in March, mostly on days with late meetings.") and a reflective prompt ("You spent more on weekends this month. Do you know what changed?").]

Personal Project · AI Interaction Design · React

Spent

A side project: an AI that connects your spending to its context (your calendar, your habits, what you've told it). Built to explore four interaction patterns for how AI earns trust, with a working prototype and observations that map directly to the enterprise problems I work on.

4 AI interaction patterns documented with enterprise parallels

The hardest part of AI product design isn't the AI.

Models can be remarkably capable. The failure is almost always in the interface: how output gets presented, how people verify it, and what they're expected to do with it.

The products I work on are only useful if people actually rely on them. That means the real design problem is rarely the AI output itself; it's the feedback loops, confidence cues, and review flows that determine whether someone acts on what the model says.

  • When to automate and when to keep humans in the loop: in LQA, hard blocks created resistance; nudges at the right moment created compliance.
  • How to make AI output interpretable without making it noisy: factor-level reasoning instead of confidence scores, because a number doesn't tell you what to fix.
  • How to design for adoption: Embedded Audio only works if clinicians actually start recording, which means the session start has to feel like nothing at all.
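The factor-level point above can be sketched in code. A minimal TypeScript sketch (hypothetical types and field names, not the production Eleos schema) of why per-factor results beat a single confidence score:

```typescript
// A single score tells the writer how "good" the note is, but not what to fix.
type ScoreResult = { confidence: number }; // e.g. 0.62 — act on what, exactly?

// Factor-level results name each check, its state, and the next action.
type CheckStatus = "pass" | "needs_attention" | "missing";

interface FactorResult {
  factor: string;        // e.g. "Medical Necessity"
  status: CheckStatus;
  reason: string;        // why the check fired
  suggestedFix?: string; // the nudge: what the clinician can do right now
}

const exampleChecks: FactorResult[] = [
  {
    factor: "Actionable Plan",
    status: "pass",
    reason: "Next-session goals are stated.",
  },
  {
    factor: "Medical Necessity",
    status: "missing",
    reason: "No ICD-10 code is linked to the note.",
    suggestedFix: "Link a diagnosis before submitting.",
  },
];

// A nudge surfaces only the actionable items instead of blocking submission.
const nudges = exampleChecks.filter((c) => c.status !== "pass");
console.log(
  nudges.map((c) => `${c.factor}: ${c.suggestedFix ?? c.reason}`).join("\n")
);
```

The design choice the sketch encodes: the interface never shows `confidence` alone; every surfaced item carries a reason and a fix, so the clinician knows what to do, not just how worried to be.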

What I keep coming back to isn't the model. It's the moment a person decides whether to act on what it says, and what the design did or didn't do to get them there.

The interface is where AI earns trust.

If you're building in a domain where the design decisions carry real weight, I'd like to work on that. I do the research, the Figma, and the code.

carlyraizon@gmail.com