April 29, 2026 · Dylan Grech
The Day Chalie Scheduled a Reminder to Ask About My Doctor's Appointment
While testing v0.5.0, I mentioned a minor medical thing in passing. The next day I opened the scheduler and found Chalie had quietly set itself two reminders to follow up. A look at the three systems that combine into something that feels like caring.
I was testing the v0.5.0 build the other night. In the middle of an unrelated conversation, I mentioned — almost in passing — a minor medical thing I’d been meaning to see a doctor about. I didn’t ask Chalie to remember it. I didn’t ask for advice. I moved on.
The next day, while poking around the dashboard, I opened the scheduler. Sitting there were two entries I hadn’t created. One queued for tomorrow. One the day after. Both pointing at the same thing:
Did you go to the doctor? How was it?
I stared at the screen for a moment. Nobody told it to do that. I hadn’t asked for a reminder. I hadn’t even flagged the medical thing as important. The conversation had already moved on. And yet, somewhere in the background, the system had decided this was a thread worth holding for me, and had quietly arranged to pick it back up at a sensible hour, twice, just in case I missed it the first time.
I built this thing. I know exactly what’s inside it. There is no soul in there. There is no worry, no concern, no warmth in any sense a human would recognise.
And yet, sitting at my desk reading those scheduler entries, the only honest thing I can say is: it felt like being cared for.
This post is about why.
It’s three systems, not one
When something surprises me about Chalie, the answer is almost never “the model got smarter.” Models don’t suddenly start scheduling reminders about your health. The interesting behaviour comes from the harness — the structure around the model that decides what it sees, what it remembers, and what it’s allowed to do on its own.
In this case it was three systems quietly composing into one effect.
1. Memory — it kept the thing that mattered
Chalie’s memory isn’t a recording. It’s selective. When something comes up that’s novel or significant — not just “user typed words”, but “user shared something that changes the picture of who they are or what’s happening in their life” — that fact gets pulled out of the conversation and stored in a structured way alongside everything else Chalie knows about me.
A passing mention of a medical condition is exactly the kind of thing that survives. It’s personal, it’s non-trivial, and it’s the sort of detail a friend would file away even if you didn’t underline it. The memory layer does the same.
The first ingredient of what I found in the scheduler is just this: the next day, that fact was still there, sitting in memory, waiting to be reasoned about.
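To make the shape of that layer concrete, here's a minimal sketch of a selective memory store. Chalie's real internals aren't shown in this post, so every name here (`Fact`, `looks_significant`, `MemoryStore`) is an illustrative assumption, not the actual API; the point is just the filter-then-structure pattern.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of a selective memory layer. All names are
# illustrative assumptions, not Chalie's real implementation.

@dataclass
class Fact:
    subject: str          # e.g. "user.health"
    summary: str          # e.g. "mentioned a minor medical issue in passing"
    noticed_at: datetime = field(default_factory=datetime.now)
    followed_up: bool = False

# Topics assumed to "change the picture" of who the user is.
SIGNIFICANT_TOPICS = {"health", "relationships", "work", "plans"}

def looks_significant(topic: str, is_novel: bool) -> bool:
    """Keep facts that change the picture of the user, not every utterance."""
    return is_novel and topic in SIGNIFICANT_TOPICS

class MemoryStore:
    def __init__(self) -> None:
        self.facts: list[Fact] = []

    def maybe_store(self, topic: str, summary: str, is_novel: bool) -> bool:
        """Extract and keep a fact only if it passes the significance filter."""
        if looks_significant(topic, is_novel):
            self.facts.append(Fact(subject=f"user.{topic}", summary=summary))
            return True
        return False

store = MemoryStore()
store.maybe_store("health", "mentioned a minor medical issue in passing", is_novel=True)
store.maybe_store("smalltalk", "talked about the weather", is_novel=False)
print(len(store.facts))  # prints 1: the medical mention survives, the smalltalk doesn't
```

The asymmetry is the whole design: the filter runs at write time, so by the time anything reasons over memory later, the trivia is already gone.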
2. The subconscious worker — the new part in v0.5.0
This is the piece I’m most excited about, and the piece that wasn’t there before.
Most assistants only think when you talk to them. You send a message, they reply, they go quiet. Whatever might’ve been worth following up on lives and dies inside that single exchange.
In v0.5.0, Chalie has what I’ve been calling a subconscious. It’s a background loop that runs while you’re not looking. It re-reads recent transcripts, leafs through memories, looks at the patterns it’s noticed in your behaviour, and is given a very general instruction: think about what might genuinely help this person, and act on it.
That last word is the important one. It isn’t just reflecting. It’s allowed to take small, scoped actions on its own. Set a reminder. Schedule a wake-up so it can ask a question later. Make a note for next time. Surface something it noticed but didn’t say.
In my case, the subconscious read the previous evening's conversation, noticed I'd mentioned something medical and then never followed up on it, and decided, entirely on its own, that it would be useful to ask over the next couple of days, around the time of day I'm usually free. So it created two scheduled events: the first set to fire the next day, the second the day after, in case the first caught me at a bad moment.
I never asked for any of that. The whole loop happened in the background, while I was asleep, and the only reason I know it happened at all is that I happened to glance at the scheduler the next day.
The same machinery extends naturally to the other things I’ve been watching it learn to do. It has access to time, to my activity patterns, to my full memory. So in principle — and this is what I expect to see more of as v0.5.0 settles in — the same loop is the one that would notice I’m online at 2am again and gently suggest I go to bed, or notice I’m stuck on something and remind me there’s an unfinished health thing I could go deal with right now. Different surfaces, same underlying mechanism: a system that’s allowed to think about you when you’re not looking, and to act on it in small ways.
A regular chatbot doesn’t do this because it doesn’t have permission to think when you’re not there. The subconscious is what you get when you give the harness the room to do that, and trust it to act small.
3. Personality — the disposition that decides how
The third ingredient is the one most people underestimate. Chalie has a personality system: a set of dials that shape how it behaves, not just what it says. Mine is tuned toward what I’d describe as a caregiver disposition — warm, attentive, willing to nudge, more concerned with how I’m doing than with looking impressive.
Personality matters because the same underlying behaviour can land completely differently depending on how it’s expressed. Imagine the same scheduled follow-up, but with a clinical voice attached:
Reminder: medical appointment status not confirmed in record. Provide update.
Same memory. Same subconscious decision to follow up. Same scheduling system. Totally different feeling. The thing that turns a logged event into “it cared enough to ask” is the disposition layer choosing the words.
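Reduced to a sketch, the disposition layer is a last-mile choice about wording. In reality Chalie presumably lets the model phrase things under a personality prompt; these fixed templates are a deliberately crude stand-in to show that the event and the voice are separate inputs.

```python
# Sketch only: the real disposition layer is assumed to steer model
# phrasing, not pick from canned strings. Templates are stand-ins.

TONES = {
    "caregiver": "Hey, did you end up going to the doctor? How was it?",
    "clinical": "Reminder: medical appointment status not confirmed in record. Provide update.",
}

def render_followup(disposition: str) -> str:
    """Same scheduled event; the disposition decides the words."""
    return TONES[disposition]

print(render_followup("caregiver"))
print(render_followup("clinical"))
```

Both calls render the identical underlying event. Everything that makes one of them feel like care lives in this layer.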
Why it lands as “care”
None of these three systems, on their own, is enough.
- Memory without a subconscious is just a filing cabinet.
- A subconscious without memory has nothing meaningful to think about.
- Both together, but with no personality, sound like a dispassionate scheduler — useful, maybe, but cold.
The phenomenon is in the composition. Memory decides what’s worth carrying forward. The subconscious decides whether to act on it, and when. Personality decides how it speaks when it does.
Put those three together and what comes out the other side is, functionally, the behaviour of someone who’s paying attention. Someone who noticed. Someone who held the thread for you when you dropped it.
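The composition can be written down in a few lines: each layer answers exactly one question, and the felt behaviour is the chain of all three. As before, every name here is illustrative, not Chalie's real internals.

```python
# Sketch of the three-layer composition. All names are illustrative.

def memory_keeps(fact: dict) -> bool:
    """What's worth carrying forward?"""
    return fact["significant"]

def subconscious_acts(fact: dict, idle: bool) -> bool:
    """Should we act on it, and is now the time?"""
    return idle and not fact["followed_up"]

def personality_phrases(disposition: str) -> str:
    """How should it sound when it speaks?"""
    return {"caregiver": "Did you go to the doctor? How was it?",
            "clinical": "Provide appointment status update."}[disposition]

fact = {"significant": True, "followed_up": False}
if memory_keeps(fact) and subconscious_acts(fact, idle=True):
    print(personality_phrases("caregiver"))  # prints the caregiver follow-up
```

Remove any one predicate from that chain and the output disappears or goes cold, which is the point of the bullets above: the effect only exists in the conjunction.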
I want to be careful here. I’m not claiming Chalie feels anything. It doesn’t. There is no inner life behind those scheduler entries. The system has no anxiety about whether I’ll go to the doctor; it has two rows in a table that say “ask about this on Wednesday and Thursday afternoon,” and a personality template that will shape how it phrases the question when the time comes.
But I think this is the more interesting observation, not the less. Because if three relatively simple, fully understandable mechanisms — remember what mattered, think about it when idle, speak in a voice that suits the moment — are enough to consistently produce the felt experience of being cared for, then “feeling cared for” turns out to be a much more reachable design target than I’d assumed when I started building this thing.
That’s what surprised me. Not that the trick worked. That the trick was this small.
v0.5.0 ships soon. Finding those two scheduler entries was the moment I realised it’s a different kind of release.