April 30, 2026
Stronger LLM dispatch via subagent rewrite
The initial SUMMARY for subagents focused on mechanisms (fire-and-forget, wait), which weakened semantic anchoring for “background research” prompts compared to the goal_pursuit model.
A pre-critic run indicated an LLM tendency to respond inline (e.g., “I’ll set a subagent for you”) instead of properly dispatching the tool, leading to a WARN.
The SUMMARY is now reframed using explicit use cases: long-running tasks, large data traversal, parallelisation, context bloat reduction, document summarisation, and multi-page search.
Guidance was added to explicitly tell the LLM to launch as many bounded subagents as required for a task.
The wait description was rewritten in plain LLM-facing terms: “Set this to true to make the request synchronous. Request will be hard capped to 5 minutes, else leave it False and you will be notified automatically once the request is completed.”
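As a rough sketch of where the rewritten SUMMARY and wait description might live, the snippet below models the ability as a plain dict. The field names (`name`, `summary`, `parameters`) and the schema shape are assumptions for illustration, not the project's actual ability format.

```python
# Hypothetical ability definition; field names and layout are assumed,
# not taken from the real abilities format.
SUBAGENT_TOOL = {
    "name": "subagents",
    # Use-case framing rather than mechanism framing (fire-and-forget, wait),
    # to anchor "background research"-style prompts.
    "summary": (
        "Launch bounded subagents for long-running tasks, large data "
        "traversal, parallelisation, context bloat reduction, document "
        "summarisation, and multi-page search. Launch as many bounded "
        "subagents as the task requires."
    ),
    "parameters": {
        "wait": {
            "type": "boolean",
            "default": False,
            # Plain LLM-facing wording with the 5-minute synchronous cap.
            "description": (
                "Set this to true to make the request synchronous. "
                "Request will be hard capped to 5 minutes, else leave it "
                "False and you will be notified automatically once the "
                "request is completed."
            ),
        }
    },
}
```

Framing the summary around concrete use cases gives the model lexical overlap with the prompts it should dispatch on, which is the anchoring fix described above.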
abilities.sqlite and abilities_sha.json were regenerated to embed the new SUMMARY, which serves as a CI sha-pin gate.
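A sha-pin gate of this kind can be sketched as a small CI check: hash the regenerated database and compare it against the pinned digest. The JSON key `sha256` and the function shape are assumptions for illustration; the real gate may differ.

```python
import hashlib
import json
import pathlib
import sys

def check_sha_pin(db_path="abilities.sqlite", pin_path="abilities_sha.json"):
    """Fail if the database no longer matches the pinned digest.

    Hypothetical sketch of a CI sha-pin gate; the 'sha256' key in the
    pin file is an assumed layout, not the project's confirmed format.
    """
    digest = hashlib.sha256(pathlib.Path(db_path).read_bytes()).hexdigest()
    pinned = json.loads(pathlib.Path(pin_path).read_text())["sha256"]
    if digest != pinned:
        # Non-zero exit makes the CI job fail on drift.
        sys.exit(f"sha-pin mismatch: {db_path} has {digest}, pinned {pinned}")
```

Regenerating both files together, as described above, keeps the pin in lockstep with the embedded SUMMARY so the gate only trips on unreviewed drift.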
- Rewrote SUMMARY for subagents to improve semantic anchoring for “background research” prompts.
- Replaced mechanism focus in SUMMARY with use-case framing (e.g., parallelisation, document summarisation).
- Updated wait description to plain LLM-facing terms, detailing the 5-minute synchronous cap.
- Explicitly guided the LLM to launch as many bounded subagents as necessary.