April 6, 2026

Memory Pipeline Redesign & Knowledge Recall Boost


The episodic memory pipeline has been entirely replaced with a psychology-grounded architecture, featuring transcript-linked episodes created by a rolling trigger at id % 25.

Forgetting is now modeled with power-law decay (Bjork), eliminating hard deletes: storage strength never decreases, while retrieval weight fades naturally over time.
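One way this could look, following Bjork's storage-strength/retrieval-strength split: a power-law curve `(1 + t)^-d` for the retrieval weight, with storage strength only ever incremented on recall. The exact blend of decay rate and storage strength is an assumption; the changelog specifies only the power-law shape and the monotone storage strength.

```python
import time


def retrieval_weight(age_seconds: float, storage_strength: float,
                     decay: float = 0.5) -> float:
    """Power-law forgetting: weight fades with age but never hits zero,
    so nothing is ever hard-deleted. Stronger storage fades slower
    (dividing decay by storage strength is an assumed coupling)."""
    age_days = age_seconds / 86400
    return (1.0 + age_days) ** (-decay / storage_strength)


def on_recall(memory: dict) -> None:
    """Recall boosts storage strength; it is never decreased anywhere."""
    memory["storage_strength"] += 1.0
    memory["last_access"] = time.time()
```

Unlike exponential decay, the power law fades fast early and then flattens, so old-but-strong memories remain reachable indefinitely.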

Episode consolidation now merges 3-5 similar episodes into a “super episode,” with emotional arousal driving the consolidation (McGaugh).

Knowledge kinds have been updated to trait, fact, procedure, preference, rule, and metric, removing concept and relationship.
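The updated kind set, expressed as a sketch enum (the class name is hypothetical; the six values come straight from the changelog):

```python
from enum import Enum


class KnowledgeKind(str, Enum):
    """Current knowledge kinds; concept and relationship were removed."""
    TRAIT = "trait"
    FACT = "fact"
    PROCEDURE = "procedure"
    PREFERENCE = "preference"
    RULE = "rule"
    METRIC = "metric"
```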

The memory skill was simplified by dropping the forget, kinds, limit, and include_transcript parameters to lower the LLM's cognitive load; an explicit forget is unnecessary now that decay is natural.

Knowledge recall was significantly improved by integrating NLTK stop words and Porter stemming into knowledge_fts.
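The normalization step might look like the sketch below. To stay self-contained it uses a trimmed stand-in stop-word set and a crude suffix stripper; the real pipeline presumably loads `nltk.corpus.stopwords.words("english")` and `nltk.stem.PorterStemmer`.

```python
# Trimmed stand-in for NLTK's English stop-word list (assumption).
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "and", "or", "it"}


def naive_stem(word: str) -> str:
    """Crude suffix stripper standing in for nltk.stem.PorterStemmer."""
    for suffix in ("ing", "edly", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word


def normalize_for_fts(text: str) -> list[str]:
    """Tokenize, drop stop words, then stem, so a query's 'barking'
    matches a stored 'bark' in the knowledge_fts index."""
    tokens = [t for t in text.lower().split() if t.isalpha()]
    return [naive_stem(t) for t in tokens if t not in STOP_WORDS]
```

Applying the same normalization at both write time and query time is what makes the stems line up inside the FTS index.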

A new doc2query service was introduced to generate potential search queries at knowledge write time, enhancing lexical matching.
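The write-time expansion could be wired up as below. The generator is left pluggable because the changelog doesn't specify the model behind the doc2query service; the function name and the append-to-document strategy are assumptions.

```python
from typing import Callable


def expand_for_index(doc_text: str,
                     generate_queries: Callable[[str], list[str]]) -> str:
    """Append doc2query expansions to a knowledge entry before FTS
    indexing, so questions a user might ask match lexically even when
    the raw text never states them verbatim."""
    queries = generate_queries(doc_text)
    return doc_text + "\n" + "\n".join(queries)
```

Because the generated queries are baked in at write time, query-time search stays a plain lexical lookup with no model in the hot path.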

Conversation overrides now use a recency signal during context assembly, and DMN quiet-hours default to skipping rather than firing.
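A sketch of both behaviors, under stated assumptions: the changelog names a recency signal but not its blend, so the half-life weighting below is illustrative, as are the default quiet-hour boundaries.

```python
import math


def context_score(item: dict, now: float,
                  half_life_hours: float = 24.0) -> float:
    """Blend base relevance with an exponential recency signal so newer
    conversation overrides outrank stale ones during context assembly."""
    age_h = (now - item["last_active"]) / 3600
    recency = math.exp(-math.log(2) * age_h / half_life_hours)
    return item["relevance"] * (0.5 + 0.5 * recency)


def dmn_should_fire(hour: int, quiet_start: int = 22,
                    quiet_end: int = 7) -> bool:
    """Quiet hours now default to skipping the DMN cycle, not firing it."""
    in_quiet = hour >= quiet_start or hour < quiet_end
    return not in_quiet
```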

  • Replaced episodic memory pipeline with psychology-grounded architecture.

  • Episodes now link directly to transcript entry IDs (rolling trigger at id%25).

  • Implemented power-law forgetting instead of exponential decay.

  • Added doc2query service for query generation at write time.

  • Improved knowledge recall with NLTK stop words and Porter stemming.

  • Simplified memory skill by removing four parameters.