March 7, 2026
Stability and Intelligence Updates
60-second timeouts were added to all vision LLM OCR calls across Anthropic, OpenAI, and Gemini to prevent long-running pipeline hangs. A centralized service now handles vision provider resolution, utilizing the existing provider cache.
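A minimal sketch of the timeout wrapper, assuming an async pipeline; the function and constant names here (`call_vision_ocr`, `OCR_TIMEOUT_SECONDS`) are illustrative, not the project's actual API:

```python
import asyncio

OCR_TIMEOUT_SECONDS = 60  # assumed constant matching the 60s limit above

async def call_vision_ocr(provider: str, image_bytes: bytes) -> str:
    """Stand-in for a provider-specific vision OCR request (hypothetical)."""
    await asyncio.sleep(0)  # placeholder for the real network call
    return f"ocr-text-from-{provider}"

async def ocr_with_timeout(provider: str, image_bytes: bytes) -> str:
    # A call that exceeds the limit raises asyncio.TimeoutError
    # instead of hanging the whole pipeline indefinitely.
    return await asyncio.wait_for(
        call_vision_ocr(provider, image_bytes),
        timeout=OCR_TIMEOUT_SECONDS,
    )

print(asyncio.run(ocr_with_timeout("anthropic", b"")))
```

The same wrapper can front any of the three providers, so the centralized resolution service only needs one timeout policy.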
The moments table schema was dropped as moments are now consistently stored as documents, removing a dead database structure.
Provider retrieval logic was corrected to return a 404 when attempting to access a soft-deleted provider, aligning with other read methods.
Triage calibration event insert failures were fixed by ensuring the id column is populated with a generated UUID in INSERT statements.
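A sketch of the fix, assuming a SQLite-style table; the table and column names (`triage_calibration_events`, `score`) are assumptions for illustration:

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE triage_calibration_events (id TEXT PRIMARY KEY, score REAL)"
)

# The fix: generate the UUID explicitly so the INSERT never omits
# the required primary-key column.
event_id = str(uuid.uuid4())
conn.execute(
    "INSERT INTO triage_calibration_events (id, score) VALUES (?, ?)",
    (event_id, 0.92),
)

row = conn.execute("SELECT id, score FROM triage_calibration_events").fetchone()
print(row)
```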
A new behavior was introduced where the assistant asks a specific follow-up question when the user volunteers personal information, enhancing conversational depth.
The UI dock was visually refined by removing backgrounds from the mic, upload, and send buttons and correcting padding to ensure symmetry.
Input dock icon alignment on desktop was fixed by adjusting flexbox properties and removing manual negative margins.
Routing logic errors causing NULLs for routing_time_ms in audit tables were resolved for both social pre-checks and cognitive reflex paths.
The ACT loop now persists iteration records correctly: the missing id column was added to the INSERT statement, preventing silent failures.
Episodic memory search consistency was improved by removing an ambiguous alias in the FTS5 query, using full table names instead.
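The change above can be illustrated with a small FTS5 query that references the virtual table by its full name rather than an alias; the table name `episodic_memory_fts` and its schema are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE episodic_memory_fts USING fts5(content)")
conn.execute(
    "INSERT INTO episodic_memory_fts (content) VALUES ('met Alice at the conference')"
)

# Before: a short alias in the MATCH clause could be resolved ambiguously.
# After: the FTS5 table is referenced by its full name throughout the query.
rows = conn.execute(
    "SELECT episodic_memory_fts.content "
    "FROM episodic_memory_fts "
    "WHERE episodic_memory_fts MATCH ?",
    ("alice",),
).fetchall()
print(rows)
```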
The /ready endpoint response structure was updated to provide granular, top-level status objects for each subsystem instead of nesting them under a ‘checks’ key.
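A sketch of the shape change, with hypothetical subsystem names (`database`, `memory`) standing in for the real ones:

```python
import json

# Old shape: subsystem statuses nested under a 'checks' key.
before = {
    "status": "ok",
    "checks": {"database": {"status": "ok"}, "memory": {"status": "ok"}},
}

# New shape: each subsystem reports as its own top-level status object.
after = {
    "status": "ok",
    "database": {"status": "ok"},
    "memory": {"status": "ok"},
}

print(json.dumps(after, indent=2))
```

Flattening the response lets health checkers probe a single subsystem key without traversing an intermediate object.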
Self-awareness and effort-aware subsystems were implemented, allowing workers to dynamically skip tasks based on memory richness.
The working memory management was overhauled by replacing queue polling with a 60s cycle EpisodicMemoryObserver for periodic consolidation.
Ambient Tool Intelligence and a foundational SelfModelService were merged, enabling proactive tool invocation and capability gap learning.
Actions requiring user input from skills are now returned as deterministic, clickable buttons in chat responses, bypassing complex routing.
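One possible payload for such a response; every field name here (`type`, `buttons`, `action_id`) is a hypothetical sketch, not the project's confirmed schema:

```python
def build_action_response(prompt: str, actions: list[str]) -> dict:
    """Build a deterministic chat response carrying clickable buttons."""
    return {
        "type": "action_request",
        "text": prompt,
        "buttons": [
            # Derive a stable identifier from the label so the click
            # handler can dispatch without any routing inference.
            {"label": a, "action_id": a.lower().replace(" ", "_")}
            for a in actions
        ],
    }

resp = build_action_response("Send the reminder now?", ["Yes", "Not now"])
print(resp["buttons"])
```

Because the buttons are generated deterministically from the skill's declared actions, the chat UI can render them without invoking the router at all.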
The entire codebase was deep-cleaned to remove all instances of user_id, reflecting the single-user system architecture across services and migrations.
- Vision LLM OCR calls now have 60s timeouts to prevent hangs.
- SelfModelService introduces continuous interoception and capability gap learning.
- Working memory consolidation is now managed by a periodic 60s cycle observer.
- Conversational Instincts now asks follow-up questions upon personal information disclosure.
- user_id was removed from the codebase across 7 database tables and services.
- Deterministic action buttons allow skills to prompt users directly in the chat UI.