Personal Project · 2026

[Hero · iPhone mockup of the Spent home feed, April 2026, showing a narrative insight ("You ordered delivery 11 times in March, almost all on days your calendar showed meetings past 6pm") and a reflective prompt ("You spent more on weekends this month. Do you know what changed?")]

Spent

AI that connects your spending to the context of your life.

Type
Personal Project
Timeline
Jan-Apr 2026
Role
Solo designer + builder
Stack
React, Recharts, Claude

The problem with finance apps

People check their bank balance and feel a vague mix of guilt and confusion. Existing tools categorize spending. They answer "where did my money go?" with a list. But the more interesting question is why. You know you spent $847 on food last month. You don't know it was $300 more than usual because your calendar was packed with meetings that ran past 6pm, so you stopped cooking and ordered delivery every night. The context lives in your head, not your spreadsheet.

No financial tool connects spending data to the context of your life. The numbers exist in isolation, stripped of the story that makes them meaningful.

Healthcare AI lives in the same territory: sensitive data, high-trust relationships, a person trying to make a decision with incomplete information. The question worth exploring was whether the same design instincts transfer. Finance turned out to be the right domain to find out.

Four interaction patterns for sensitive AI

These are the design contributions of the project. Each addresses a specific challenge in AI-to-human communication, and each maps directly to problems solved in clinical AI at Eleos Health.

Pattern 01
Narrative Insight

Lead with the observation, not the number. The AI synthesizes spending data plus life context into a plain-language story that feels like a friend noticing something rather than a system reporting it.

Design challenge
Tone calibration. Connect behavior to context, not behavior to morality. Let the user draw their own conclusion, not the AI.
Clinical parallel
The same problem exists when AI flags a documentation issue to a therapist. The insight must be useful without undermining the clinician's judgment or triggering defensiveness.
Narrative insight · Food & Delivery
Your busy weeks cost you an extra $180
You ordered delivery on 11 days in March, almost all when your calendar showed meetings past 6pm.
Tap to see what connected →
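A narrative insight like the one above can be sketched as a small synthesis step: join spending events to calendar context, check that the pattern is strong enough to mention, then phrase it as an observation. Everything below is illustrative, an assumption about how such logic could look, not the prototype's actual implementation; the 70% overlap threshold and five-day minimum are invented for the sketch.

```typescript
// Illustrative sketch only: names and thresholds are assumptions,
// not Spent's actual implementation.
interface Txn { merchant: string; amount: number; date: string }          // date: YYYY-MM-DD
interface CalendarDay { date: string; lastMeetingEndsAfter6pm: boolean }

const DELIVERY = new Set(["DoorDash", "Uber Eats", "Caviar"]);

// Connect behavior (delivery orders) to context (late meetings), and
// phrase the result as an observation, not a judgment. Returns null
// when the pattern is too weak to be worth saying out loud.
function narrativeInsight(txns: Txn[], calendar: CalendarDay[]): string | null {
  const lateDays = new Set(
    calendar.filter(d => d.lastMeetingEndsAfter6pm).map(d => d.date),
  );
  const deliveryDays = new Set(
    txns.filter(t => DELIVERY.has(t.merchant)).map(t => t.date),
  );
  const overlap = Array.from(deliveryDays).filter(d => lateDays.has(d)).length;
  // Hypothetical thresholds: at least 5 delivery days, 70% on late-meeting days.
  if (deliveryDays.size < 5 || overlap / deliveryDays.size < 0.7) return null;
  return (
    `You ordered delivery on ${deliveryDays.size} days, ` +
    `almost all when your calendar showed meetings past 6pm.`
  );
}
```

Note what the return string does not contain: no "you should", no comparison to other users. The connection is stated; the conclusion is left to the reader.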

Pattern 02
Reflective Prompt

Instead of always telling you what it found, the AI sometimes asks. It points to where to look and lets you do the sense-making. You supply the context it cannot access.

Design challenge
When should AI tell vs. ask? Too many insights feel like surveillance; too many questions feel like interrogation. Tell when there is a specific, non-obvious connection. Ask when the AI notices a change but lacks the context to explain it.
Clinical parallel
There are moments to flag an issue explicitly and moments to prompt the clinician to review something themselves. The tell/ask boundary is the same design problem in a different domain.
Reflective prompt · Weekends
"You spent noticeably more on weekends this month. Do you know what changed?"
Work was stressful
I traveled a lot
Not sure
+ Add your own reflection
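The tell/ask boundary can be made concrete as a small decision function. This is a sketch under assumed signals (`changeDetected`, `contextAvailable`, `connectionNonObvious` are hypothetical names, not fields from the prototype):

```typescript
// Hypothetical signals; the real heuristics may differ.
interface Signal {
  changeDetected: boolean;       // spending shifted vs. baseline
  contextAvailable: boolean;     // calendar/behavioral data explains it
  connectionNonObvious: boolean; // would the user plausibly not have noticed?
}

type Move = "tell" | "ask" | "stay-quiet";

function decide(s: Signal): Move {
  if (!s.changeDetected) return "stay-quiet";        // quiet weeks are data too
  if (s.contextAvailable && s.connectionNonObvious)
    return "tell";                                   // specific, non-obvious connection
  if (!s.contextAvailable) return "ask";             // AI lacks context; the user supplies it
  return "stay-quiet";                               // obvious change: stating it reads as surveillance
}
```

The default is silence: the function only speaks when it has something specific to say or a genuine gap the user can fill.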

Pattern 03
Progressive Disclosure of Reasoning

When the AI surfaces an insight, you can expand to see how it connected the dots. Three layers of depth, entirely user-controlled. Trust is built by verifiability, even when users rarely verify.

Design challenge
Transparency without clutter. The insight should be immediate. The reasoning should be available but never forced. Each layer builds confidence that the AI is not fabricating patterns.
Clinical parallel
The same pattern is used in LQA: showing clinicians why AI flagged a compliance issue. Users trust a system more when they can verify its reasoning, even if they rarely choose to.
Layer 1 · The insight
Your delivery spend correlates directly with late-meeting weeks.
Layer 2 · Supporting data
DoorDash ×6, Uber Eats ×3, Caviar ×2 · all after 7pm
Show transactions ↓
Layer 3 · AI reasoning
Pattern held 8 of 12 weeks · 4.2× higher on late-meeting days
Show reasoning ↓
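One way to model the three layers is a single insight record whose deeper fields exist up front but are rendered only on request. The field names below are hypothetical, a sketch of the structure rather than the prototype's code:

```typescript
// Illustrative shape for a three-layer insight; field names are assumptions.
interface LayeredInsight {
  insight: string;                                    // Layer 1: always visible
  supportingData?: {                                  // Layer 2: on demand
    label: string;
    transactionIds: string[];
  };
  reasoning?: {                                       // Layer 3: on demand
    weeksMatched: number;
    weeksObserved: number;
    effectSize: string;
  };
}

// The UI renders only as deep as the user has expanded;
// deeper layers are always available, never forced.
function render(i: LayeredInsight, depth: 1 | 2 | 3): string[] {
  const out = [i.insight];
  if (depth >= 2 && i.supportingData) out.push(i.supportingData.label);
  if (depth >= 3 && i.reasoning)
    out.push(
      `Pattern held ${i.reasoning.weeksMatched} of ${i.reasoning.weeksObserved} weeks` +
      ` · ${i.reasoning.effectSize}`,
    );
  return out;
}
```

Because the full record is computed once, expanding a layer reveals evidence that already existed rather than generating a post-hoc justification, which is what makes the reasoning verifiable.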

Pattern 04
Compounding Memory

The AI accumulates context from reflections and behavioral data, and visibly uses it to make future insights more personal. The system gets smarter. You return because it understands you better than it did last week.

Design challenge
Making AI learning visible without it feeling creepy. The answer is radical transparency: a Memory screen that shows exactly what the AI knows, how it learned it, and full user control to correct or delete anything.
Clinical parallel
This maps to AI systems that improve with clinician feedback. How do you make the learning loop visible so users trust that their input matters, without over-promising AI capability?
What I've learned about you
You cook when work feels manageable
From your reflection on March 14
Shaped by you
Late meetings reliably lead to delivery
Observed across 8 weeks of patterns
Weekends = social spending, not stress
From your reflection on April 2
Shaped by you
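The "Shaped by you" tag can fall directly out of provenance: every memory records where it came from, and the UI derives both the tag and the source line from that record. A sketch with illustrative field names:

```typescript
// Hypothetical memory record; field names are assumptions, not the app's schema.
type Provenance =
  | { kind: "reflection"; date: string }  // the user told the AI directly
  | { kind: "observed"; weeks: number };  // inferred from behavioral data

interface MemoryEntry {
  statement: string;
  source: Provenance;
  deleted: boolean; // full user control: any entry can be corrected or removed
}

// "Shaped by you" is not a separate flag; it is derived from provenance.
const shapedByYou = (e: MemoryEntry): boolean => e.source.kind === "reflection";

function describeSource(e: MemoryEntry): string {
  return e.source.kind === "reflection"
    ? `From your reflection on ${e.source.date}`
    : `Observed across ${e.source.weeks} weeks of patterns`;
}
```

Storing provenance rather than a freestanding label keeps the Memory screen honest by construction: the system cannot claim a memory was user-shaped unless a reflection actually produced it.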

Six decisions that shaped the product

Narrative first, data second
The words are the interface. Charts and numbers support the story; they never lead it. This is the opposite of every finance app I audited, and it is intentional. Spent is about understanding, not tracking. A number tells you what happened. A narrative helps you understand why.
Honest visualization
Early iterations used smooth area charts with monotone interpolation. They looked polished, but they were lying: they implied continuous data between discrete transactions. Spending happens in bursts, not curves. Every smooth chart was replaced with a bar chart. Discrete bars for discrete transactions. The rule: chart types should reflect the nature of the data. Never smooth away the gaps.
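The honest-visualization rule comes down to a small data-shaping choice: bin transactions by day and emit zero-spend days explicitly, so the chart shows gaps instead of an interpolated slope. A sketch (the function name and shapes are assumptions, not the prototype's charting code):

```typescript
// Spending is a sparse series of discrete events; binning by day
// preserves the gaps a smoothed curve would paper over.
// (Illustrative sketch only.)
interface SpendEvent { date: string; amount: number } // date: YYYY-MM-DD

function dailyBars(
  txns: SpendEvent[],
  days: string[], // every day in the visible range, in order
): { date: string; total: number }[] {
  const byDay = new Map<string, number>();
  for (const t of txns) byDay.set(t.date, (byDay.get(t.date) ?? 0) + t.amount);
  // Emit every day, including zero-spend days: the gaps are the point.
  return days.map(date => ({ date, total: byDay.get(date) ?? 0 }));
}
```

The resulting array feeds a Recharts `<BarChart data={bars}>` with `<Bar dataKey="total" />`; zero-spend days render as empty space rather than a curve pretending money moved continuously.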
Feed not dashboard
Dashboards invite scanning. Feeds invite reading. Since the narrative is the product, the interface should encourage reading. The home screen is a chronological stream of typed insight cards, two to five per week, not a daily deluge. Sparse by design. Quiet weeks are data too.
Warm palette not fintech blue
Spent should feel like a journal, not a bank. Soft ambers, muted sage greens, warm coral. Green is desaturated (not money green), coral replaces red (red triggers financial anxiety), and plum marks memory and reflection elements: things that are personal, not transactional.
Sensitive categories
The settings screen includes a "Sensitive categories" control: topics the AI will never comment on. Healthcare, therapy, alcohol, gifts. This is a binary promise: I will not comment on spending in these categories. Ever. No explanation required from the user. The most powerful trust-building feature is sometimes the thing the AI does not say.
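A binary promise is easiest to keep as a hard filter applied before any model sees the data, rather than as a tone instruction the AI might drift from. A minimal sketch with hypothetical names:

```typescript
// Sketch of the "never comment" guarantee; names are illustrative assumptions.
const sensitiveCategories = new Set(["healthcare", "therapy", "alcohol", "gifts"]);

interface CandidateInsight { category: string; text: string }

// A hard filter, not a soft preference: insights in sensitive categories
// are dropped before generation, so the AI can never comment on them.
function permitted(insights: CandidateInsight[]): CandidateInsight[] {
  return insights.filter(i => !sensitiveCategories.has(i.category));
}
```

Implementing the promise as exclusion at the data boundary is what makes "ever" credible: there is no prompt to misread and no edge case to phrase badly.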
The follow-up question
When you share a reflection, the AI does not just acknowledge it. It asks a follow-up. "Would you rather I flag social weekends separately, or just add context?" Quick-tap response options. This turned a one-shot form submission into a brief conversation and surfaces preference data that makes the AI more useful over time.

A working React prototype

Six screens with real navigation, 40+ simulated transactions with merchant names and calendar context, and interactive elements throughout. Tap an insight to see the full story, expand the AI reasoning, or explore the Memory screen to see what it has learned. Built with React, Recharts, and Claude.

Five iterations, each with a specific lesson

01
Initial build
All screens functional. Smooth area charts. Pre-written AI narratives. The structure worked. The data did not feel real.
02
Data visualization honesty
Feedback: "The squiggly lines feel arbitrary." Replaced all smooth area charts with bar charts. Spending happens in transactions, not a continuous flow. Established the principle: chart types should reflect the nature of the data. Never smooth away the gaps.
03
Transaction grounding
Feedback: "I want to click in and see how much was spent and where." Added 27 transactions with merchant names, amounts, and calendar context. Real brand names: DoorDash, Trader Joe's, Sweetgreen, Uber Eats, Tatte Bakery. The prototype went from theoretical to tangible. You could now verify the AI's claims yourself, which is exactly how trust works.
04
Retention and personality
Feedback: "What would keep someone coming back?" Added the AI Memory system, compounding intelligence, "Shaped by You" tags, reflection impact statements, and the pulsing alive indicator. Introduced the fourth AI pattern. The retention mechanism is the AI getting smarter over time.
05
Depth and realism
Added subscription and social spending insight types. Made Spending Story categories tappable with drill-down transaction lists. Built the reflection follow-up conversation. Expanded to 40+ transactions across five categories. Prototype grew from 3 feed cards to 5, from 2 insight types to 4.

The design argument

This is not a finance app. It is a design argument. The four AI interaction patterns documented here are the same patterns used at Eleos Health to help clinicians trust AI-generated documentation feedback. The domain changed. The design thinking did not.

"AI products that surface personal insights need to earn trust through tone, timing, and transparency, whether the stakes are clinical documentation or your credit card statement."

0→1
Concept through execution with no existing constraints or team.
4
AI interaction patterns documented with enterprise parallels, transferable to any sensitive-data product.
1.5k+
Lines of React. A working prototype, not mockups. Real navigation, real transactions, real interactions.
5
Iterations from initial build to a product worth returning to.