March 8, 2026
Self-Correction and Foundational Integrity
Shipped the complete Uncertainty Engine for detecting and resolving memory contradictions, alongside a series of critical fixes for data integrity, privacy, and performance.
The Uncertainty Engine is Live
The major effort today was completing and merging the Uncertainty Engine. This is a huge milestone for Chalie’s long-term cognitive stability. The system can now actively detect and manage contradictions within its own memory, moving beyond simple information storage to a state of self-correction.
This involved shipping the final three phases of the project. We now have a ContradictionClassifierService that uses an LLM to perform pairwise classification on memories, discriminating between temporal changes (e.g., “I like coffee” -> “I prefer tea now”) and true contradictions. This is triggered both at ingestion time and by a new autonomous ReconcileAction that periodically scans for inconsistencies during idle moments.
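The pairwise step can be sketched as below. This is a minimal illustration, not the real `ContradictionClassifierService`: the prompt wording, the verdict labels, and the `llm_complete` callable are all assumptions made for the example.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    COMPATIBLE = "compatible"
    TEMPORAL_CHANGE = "temporal_change"   # preference drift, not a conflict
    CONTRADICTION = "contradiction"       # genuinely incompatible memories


# Hypothetical prompt; the production prompt is more elaborate.
PROMPT_TEMPLATE = (
    "Memory A: {a}\n"
    "Memory B: {b}\n"
    "Classify the relationship as one of: compatible, temporal_change, "
    "contradiction. Answer with a single word."
)


@dataclass
class PairClassification:
    memory_a: str
    memory_b: str
    verdict: Verdict


def parse_verdict(raw: str) -> Verdict:
    """Map a raw LLM reply onto a known verdict label."""
    token = raw.strip().lower().rstrip(".")
    for verdict in Verdict:
        if verdict.value == token:
            return verdict
    raise ValueError(f"unrecognised verdict: {raw!r}")


def classify_pair(a: str, b: str, llm_complete) -> PairClassification:
    """Run one pairwise comparison through an injected LLM callable."""
    reply = llm_complete(PROMPT_TEMPLATE.format(a=a, b=b))
    return PairClassification(a, b, parse_verdict(reply))


# Example from the post, with the LLM stubbed out:
result = classify_pair(
    "I like coffee",
    "I prefer tea now",
    llm_complete=lambda prompt: "temporal_change",
)
```

Injecting `llm_complete` keeps the classifier testable without a live model, which matters when the same logic runs both at ingestion time and from the idle-time `ReconcileAction`.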
When contradictions are found, they are surfaced in prompts to seek clarification, and actions taken from uncertain memories are gated, reducing the agent’s confidence. This all feeds into a self-tuning loop where Chalie’s tolerance for uncertainty can adjust based on user feedback. The corresponding architecture and design documents were updated to reflect the final implementation, moving them from planning to a record of the finished system.
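The gating and self-tuning pieces might look roughly like this. The penalty factor, step size, and function names here are invented for illustration; the real values are what the feedback loop tunes.

```python
UNCERTAINTY_PENALTY = 0.5  # assumed scaling factor, not from the real system


def gate_confidence(base: float, used_ids: set, contested_ids: set) -> float:
    """Scale down an action's confidence if any memory it relies on is
    currently contested by an unresolved contradiction."""
    if used_ids & contested_ids:
        return base * UNCERTAINTY_PENALTY
    return base


def tune_tolerance(tolerance: float, user_confirmed_conflict: bool,
                   step: float = 0.05) -> float:
    """Self-tuning loop: tighten tolerance when the user confirms a flagged
    conflict was real, relax it when they dismiss the flag as noise."""
    adjusted = tolerance - step if user_confirmed_conflict else tolerance + step
    return min(1.0, max(0.0, adjusted))  # keep tolerance in [0, 1]
```

An action built on the contested memory `"m1"` would have its confidence halved, while one built only on settled memories passes through untouched.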
Hardening Data Management and Privacy
A cluster of important fixes landed today, all centered on data integrity and privacy. Most critically, we discovered the “delete-all” privacy endpoint was silently failing. It was using TRUNCATE ... CASCADE, which is PostgreSQL syntax, on our SQLite database. The resulting error was caught by a broad try/except and ignored, so user data was never actually removed. This has been corrected to use the proper DELETE FROM statement, and the fix was verified. User trust is paramount, and this was a serious bug.
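The failure mode is easy to reproduce. This sketch uses a single illustrative table and wipes it wholesale; the real endpoint scopes deletions to one user's rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO memories (body) VALUES ('hello')")

# The old statement is simply invalid SQL in SQLite, so every call raised
# an OperationalError that the endpoint's broad try/except swallowed:
truncate_failed = False
try:
    conn.execute("TRUNCATE TABLE memories CASCADE")
except sqlite3.OperationalError:
    truncate_failed = True  # this is the error that was silently ignored


def delete_all_user_data(conn: sqlite3.Connection, tables: list[str]) -> None:
    """Portable fix: DELETE FROM works on SQLite (and PostgreSQL alike).
    The `with conn:` block runs all deletions in one transaction."""
    with conn:
        for table in tables:
            conn.execute(f"DELETE FROM {table}")


delete_all_user_data(conn, ["memories"])
remaining = conn.execute("SELECT COUNT(*) FROM memories").fetchone()[0]
```

The lesson generalizes: a bare `except Exception: pass` around a destructive operation turns a loud syntax error into a silent privacy violation.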
On a related note, the privacy data export could time out on large user accounts because it was buffering the entire database into a single JSON object in memory before sending. This has been re-engineered to stream the JSON response, yielding data table-by-table. This completely eliminates the timeout issue and drastically reduces peak memory usage.
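The streaming shape can be sketched with a generator that emits valid JSON incrementally; the table names and row format here are illustrative, and in production the generator would be handed to the web framework's streaming response rather than joined into a string.

```python
import json
import sqlite3
from typing import Iterator


def stream_export(conn: sqlite3.Connection, tables: list[str]) -> Iterator[str]:
    """Yield a JSON export table-by-table. Peak memory is bounded by one row
    at a time, not the whole database, so large accounts no longer time out
    while a giant object is built up front."""
    yield "{"
    for i, table in enumerate(tables):
        if i:
            yield ","
        yield json.dumps(table) + ":["
        cursor = conn.execute(f"SELECT * FROM {table}")
        columns = [d[0] for d in cursor.description]
        for j, row in enumerate(cursor):
            if j:
                yield ","
            yield json.dumps(dict(zip(columns, row)))
        yield "]"
    yield "}"


# Demo on an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO memories (body) VALUES ('hello'), ('world')")
export = "".join(stream_export(conn, ["memories"]))
data = json.loads(export)  # the concatenated chunks form valid JSON
```

Because each chunk is flushed as it is produced, the client starts receiving bytes immediately instead of waiting for the whole serialization to finish.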
Finally, we addressed an issue where hard-deleting a document left orphaned vector embeddings and search index entries behind. The sqlite-vec and FTS5 virtual tables don’t support foreign key cascades, so we now explicitly delete from these tables within the same transaction to ensure no data is left behind.
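The shape of that fix, with hypothetical table names and plain tables standing in for the virtual ones (sqlite-vec is an external extension and can't be assumed here):

```python
import sqlite3


def hard_delete_document(conn: sqlite3.Connection, doc_id: int) -> None:
    """Virtual tables ignore foreign-key cascades, so the embedding and
    full-text rows must be removed explicitly, inside the same transaction
    as the document row, leaving no orphans if anything fails midway."""
    with conn:  # commit all three deletes together, or roll back all three
        conn.execute("DELETE FROM doc_embeddings WHERE doc_id = ?", (doc_id,))
        conn.execute("DELETE FROM doc_fts WHERE rowid = ?", (doc_id,))
        conn.execute("DELETE FROM documents WHERE id = ?", (doc_id,))


# Demo with plain-table stand-ins for the sqlite-vec / FTS5 virtual tables:
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE documents (id INTEGER PRIMARY KEY, body TEXT);
    CREATE TABLE doc_embeddings (doc_id INTEGER, vec BLOB);
    CREATE TABLE doc_fts (body TEXT);
""")
conn.execute("INSERT INTO documents VALUES (1, 'note')")
conn.execute("INSERT INTO doc_embeddings VALUES (1, x'00')")
conn.execute("INSERT INTO doc_fts (rowid, body) VALUES (1, 'note')")

hard_delete_document(conn, 1)
orphans = sum(
    conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
    for t in ("documents", "doc_embeddings", "doc_fts")
)
```

Wrapping the deletes in one transaction is the key point: a crash between statements can otherwise recreate exactly the orphaned-embedding problem the fix was meant to solve.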
API Polish and Concurrency Fixes
We knocked out a few smaller but important issues. A race condition was allowing duplicate moments to be created if the same content was posted twice before the async processing pipeline could run. We’ve fixed this by synchronously inserting the document’s embedding into the vector index at creation time, so the duplicate check sees it immediately.
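The essence of the fix, reduced to a sketch: the dedup key must become visible before the creation call returns, not later in an async pipeline. A content hash and a lock stand in for the real embedding and vector index here; both are assumptions for the example.

```python
import hashlib
import threading


class MomentStore:
    """Toy model of the race fix: check-and-insert happens synchronously
    and atomically at creation time, so a concurrent request posting the
    same content cannot slip past the duplicate check."""

    def __init__(self) -> None:
        self._keys: set[str] = set()
        self._lock = threading.Lock()

    def create(self, content: str) -> bool:
        key = hashlib.sha256(content.encode()).hexdigest()
        with self._lock:
            if key in self._keys:
                return False          # duplicate detected immediately
            self._keys.add(key)       # visible before create() returns
        return True


store = MomentStore()
first = store.create("same content")   # succeeds
second = store.create("same content")  # rejected as a duplicate
```

The old behavior, by contrast, only added the key after the async pipeline ran, leaving a window in which both requests passed the duplicate check.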
We also made a couple of small API improvements: the document upload endpoint now correctly includes the file_hash in its 201 Created response so clients can verify integrity, and the /observability/memory endpoint now returns the correct nested data structure expected by our internal services and test suites.