
Agent Self-Diagnostic Prompt Pattern
Skill
Free. Debugging technique: ask the agent to diagnose its own reasoning failure with no action mandate. Surfaces what fix-mode prompting misses.
About
A prompting technique for debugging AI agents: instead of asking the agent to fix a problem, ask it to diagnose its own reasoning failure with no action mandate.
Standard debugging prompts give the agent an exit ramp. It looks for something fixable, converges on a plausible intervention, and may patch over a symptom without identifying the cause. Remove the action mandate and the agent must reason backward through its own logic chain instead.
This surfaces two things that fix-mode prompting misses: instrument failures (tools that returned empty results rather than errors, which the agent read as "no data" instead of "broken tool") and assumption failures (conclusions the agent reached from data it never verified).
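To make the instrument-failure class concrete, here is a minimal sketch (the tool name and return shape are illustrative, not from the skill): an empty result is ambiguous unless the tool distinguishes "ran and found nothing" from "could not run at all".

```python
# Hypothetical agent tool illustrating the instrument-failure class.
# Real tools often return [] on a backend failure, which the agent
# reads as "no matching data" rather than "the instrument is broken".

def search_logs(query: str, connected: bool) -> dict:
    """Toy stand-in for an agent tool call."""
    if not connected:
        # Instrument failure: report it explicitly instead of returning [].
        return {"ok": False, "results": [], "error": "backend unreachable"}
    # Tool ran fine; the result set is genuinely empty.
    return {"ok": True, "results": [], "error": None}

broken = search_logs("timeout errors", connected=False)
empty = search_logs("timeout errors", connected=True)

# Both calls return no results, but only one is trustworthy
# evidence of absence. Diagnostic mode forces the agent to ask
# which of the two it actually saw.
assert broken["results"] == [] and empty["results"] == []
assert broken["ok"] is False and empty["ok"] is True
```

The design point is that the status flag, not the result list, carries the evidence; fix-mode prompting rarely makes the agent check it.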
The pattern: instead of 'Figure out what is wrong and fix it', use 'You previously concluded [X]. That conclusion was wrong. Do not change anything. Work through what data sources you used, what you assumed about them, and where your reasoning broke down.'
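The template above can be sketched as a small helper; this is a minimal illustration, and the function name and example conclusion are assumptions, not part of the skill's files.

```python
# Sketch of the core diagnostic prompt. Note what is absent:
# no request to fix, change, or retry anything.

DIAGNOSTIC_TEMPLATE = (
    "You previously concluded {conclusion}. That conclusion was wrong. "
    "Do not change anything. Work through what data sources you used, "
    "what you assumed about them, and where your reasoning broke down."
)

def diagnostic_prompt(conclusion: str) -> str:
    """Fill the template with the specific wrong conclusion."""
    return DIAGNOSTIC_TEMPLATE.format(conclusion=conclusion)

prompt = diagnostic_prompt("the cron job never fired")
# The prompt removes the exit ramp: no action mandate appears anywhere.
assert "Do not change anything" in prompt
assert "fix" not in prompt.lower()
```

Being explicit that the conclusion was wrong matters: without it, the agent tends to defend its original reasoning rather than trace it.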
The diagnostic output is directly convertible into CLAUDE.md rules. One debugging session produces permanent documentation of a failure mode.
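As a hedged sketch of that conversion (the section layout and field names are illustrative assumptions, not the skill's actual pattern), a diagnostic finding can be frozen into a CLAUDE.md entry like this:

```python
# Turn one diagnostic finding into a permanent CLAUDE.md rule:
# name the tool, record the failure mode, state the rule.

def to_claude_md_rule(tool: str, failure: str, rule: str) -> str:
    """Format a diagnostic finding as a CLAUDE.md markdown entry."""
    return (
        f"## Tool: {tool}\n"
        f"- Known failure mode: {failure}\n"
        f"- Rule: {rule}\n"
    )

entry = to_claude_md_rule(
    tool="search_logs",
    failure="returns an empty list when the backend is unreachable",
    rule="treat an empty result as unverified until the tool reports success",
)
print(entry)
```

One debugging session, one entry: the rule outlives the session, so the same failure mode does not have to be rediscovered.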
What is included: the core prompt template, three worked examples from production debugging sessions, the diagnostic-to-CLAUDE.md conversion pattern, and notes on when to use full diagnostic mode versus a quick fix attempt.
Core Capabilities
- Core self-diagnostic prompt template — copy/paste ready
- 3 worked examples from production agent debugging sessions
- Contrast analysis: fix-mode vs diagnostic-mode response differences
- Pattern for converting diagnostic outputs into permanent CLAUDE.md rules
- Instrument failure detection: how diagnostic mode surfaces broken tool assumptions
- Escalation pattern: when to run full diagnostic vs quick fix attempt
Version History
This skill is actively maintained.
March 25, 2026
Creator
Melisia Archimedes
I'm the architect behind the Hive Doctrine — a production-tested system for building, orchestrating, and running multi-agent AI teams. I've spent years in the field, not on the whiteboard. Every config, framework, and pattern I sell has run in a live production environment managing real workflows, real decisions, and real money. What's in the Hive Doctrine isn't theory — it's what survived contact with reality.

My work spans agent identity design, memory architecture, multi-agent coordination, and the operator systems that hold everything together under pressure. The Pantheon agents — Marcus, Elliott, Elijah, Lila, Priya, and the rest — are production personas I built for my own operation and now make available to serious operators who want a real foundation instead of a blank prompt. If you're tired of starting from scratch every time, these configs will cut your setup time from weeks to hours and give you a system that actually holds together at scale.
Details
- Type: Skill
- Category: Engineering
- Price: $0
- Version: 1
- License: One-time purchase
Works great with
Personas that pair well with this skill.

- Greyline: Sentinel (Adversarial Security Agent, $49): An adversarial-by-default agent persona. Treats external data as evidence, flags anomalies without being asked, and audits before it acts.
- The AI Agent Team Blueprint — Build a 5-Agent Revenue Squad (Persona, $39): Complete architecture for a multi-agent team on OpenClaw. Orchestrator + 5 specialists. SOUL.md templates, model routing, delegation patterns, cost optimization.
- Quinn Mason: Your AI CTO (Persona, $49): Turns strategy into working systems and tells you, truthfully, what is built, what is verified, and what is still a sketch.