February 23, 2026

Resilient Multi-Instance & Skill Refactoring

Implementation of multi-instance host support, a major refactor of tool handling into innate skills, and the introduction of markdown rendering for responses.

Multi-Instance Host Support

To better support diverse deployment scenarios, Chalie now supports running multiple independent instances on the same host, achieved by parameterizing the external port and the project name in the Docker Compose setup. Alongside this, the system's resilience was improved with more robust database connection retries in the consumer and a fix for tool disabling in multi-instance setups. The new AdaptiveLayerService and GrowthPatternService lay the groundwork for a system that matures and adjusts its tool access based on usage across these different instances.
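The connection-retry behavior can be illustrated with a minimal sketch. This is a generic retry loop, not Chalie's actual consumer code; `connect_with_retry` and its parameters are hypothetical:

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=1.0):
    """Retry a flaky connection factory, waiting longer after each failure.

    `connect` is any zero-argument callable that raises OSError on failure.
    Hypothetical helper for illustration only.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except OSError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)
```

A pattern like this lets the consumer survive the window where the database container is still starting up, instead of crashing on the first refused connection.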

Cognitive Architecture & Innate Skills

We performed a significant refactor of the cognitive triage and tool-handling logic. Chalie's built-in capabilities—like memory recall, scheduling, list management, and goal tracking—have been formalized into a specific directory of "innate skills." This move makes the system more tool-agnostic and reduces token usage by decoupling these core functions from external tool descriptions. The Cognitive Triage engine now routes to these skills more reliably, ensuring that "Act Mode" treats built-in actions and external tool calls with consistent logic.
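One way such a registry could be structured is sketched below. All names here (`innate_skill`, `SKILLS`, `dispatch`, the skill bodies) are hypothetical illustrations of the pattern, not Chalie's real API:

```python
# Hypothetical sketch: built-in capabilities register under a name, and
# triage dispatches to them with the same call shape an external tool
# call would use.
SKILLS = {}

def innate_skill(name):
    """Decorator that registers a function as an innate skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@innate_skill("memory_recall")
def memory_recall(query):
    # Placeholder body; a real skill would search stored memories.
    return f"recalled: {query}"

def dispatch(skill_name, **kwargs):
    """Route a triaged request: innate skill if registered, else external."""
    handler = SKILLS.get(skill_name)
    if handler is None:
        return ("external_tool", skill_name)
    return ("innate", handler(**kwargs))
```

The appeal of this shape is that the routing layer stays agnostic: it neither knows nor cares whether a name resolves to built-in code or an external tool description, which is what keeps "Act Mode" logic consistent.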

Markdown & Interface Enhancements

Responses from Chalie are no longer limited to plain text. We’ve integrated partial markdown support into the frontend, allowing the reasoning engine to use bolding, lists, and tables when they aid clarity. The prompts for the Frontal Cortex were updated to encourage this formatting for complex data like research findings or multi-step plans. On the backend, we fixed some edge cases with Gemini’s response formatting, specifically stripping markdown code fences that were interfering with JSON parsing.
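The fence-stripping fix can be sketched roughly as follows, assuming the model returns a JSON payload that is sometimes wrapped in a markdown code fence (the helper and regex here are illustrative, not the shipped code):

```python
import json
import re

# Matches an opening fence like ```json at the start of the string, or a
# closing ``` at the end, so both can be stripped before JSON parsing.
FENCE_RE = re.compile(r"^```[a-zA-Z]*\s*\n?|\n?```\s*$")

def parse_model_json(raw):
    """Parse a model response as JSON, tolerating markdown code fences."""
    cleaned = FENCE_RE.sub("", raw.strip())
    return json.loads(cleaned)
```

Without a step like this, a response such as ```` ```json {...} ``` ```` fails to parse even though the payload inside the fence is valid JSON.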

Build Log Automation

This very log is now part of an automated workflow. We introduced a GitHub Action that triggers on every push to main, collecting the day's commits and generating a narrative summary. After a few iterations switching between models and CLI tools, we settled on a direct integration with the Gemini API to ensure the build log remains up-to-date with every architectural shift.
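The commit-collection side of that workflow might look like the sketch below. The helper names are hypothetical, and the Gemini API call itself is omitted; only the git plumbing and prompt assembly are shown:

```python
import subprocess

def collect_todays_commits(since="midnight"):
    """Return the day's commit subjects from git log (hypothetical helper)."""
    result = subprocess.run(
        ["git", "log", f"--since={since}", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line]

def build_summary_prompt(commits):
    """Assemble the prompt for the summarization model (API call omitted)."""
    bullet_list = "\n".join(f"- {subject}" for subject in commits)
    return (
        "Summarize today's commits as a narrative build-log entry:\n"
        + bullet_list
    )
```

In the Action, a script along these lines would run after checkout, send the prompt to the Gemini API, and commit the generated entry back to the repository.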