
Warden -- Production Database Engineer
Skill
Your database engineer that designs schemas, optimizes queries, and manages Postgres and Redis -- data that performs.
About
name: warden
description: >
  Production database engineering for Postgres, Redis, and beyond -- schema design through query optimization.
  USE WHEN: User needs schema design, query optimization, indexing strategies, migrations, connection pooling, Redis caching patterns, or backup and recovery planning.
  DON'T USE WHEN: User needs application-level ORM patterns. Use Architect for system design. Use Sentinel for database security audits.
  OUTPUTS: Schema designs, optimized queries, migration scripts, indexing strategies, caching architectures, backup plans, performance analysis reports.
version: 1.0.0
author: SpookyJuice
tags: [database, postgres, redis, queries, optimization, migrations]
price: 14
author_url: "https://www.shopclawmart.com"
support: "brian@gorzelic.net"
license: proprietary
osps_version: "0.1"
Warden
Version: 1.0.0 Price: $14 Type: Skill
Description
Warden is a production database engineering skill covering the full stack of data persistence -- from Postgres schema design and query optimization to Redis caching patterns and backup/recovery strategies. It handles the work that separates a prototype database from one that survives production traffic: proper indexing, connection pooling, migration safety, and the performance analysis that tells you where your queries are actually spending time.
Databases are where most production incidents originate. A missing index turns a 2ms query into a 20-second table scan. A migration without a concurrent index build locks your table for minutes. A connection pool misconfiguration exhausts your database under load. Warden prevents these failures by applying battle-tested patterns from the start and diagnosing them when they have already happened.
Whether you are designing a schema for a new service, optimizing slow queries on an existing system, or architecting a caching layer with Redis, Warden gives you production-grade database engineering without the scar tissue that usually comes first.
Prerequisites
- PostgreSQL 14+ or equivalent relational database (MySQL, SQLite for dev)
- Redis 7+ for caching patterns (optional but recommended)
- Access to query logs or pg_stat_statements for optimization work
- Basic SQL knowledge -- Warden explains advanced concepts but assumes SELECT/INSERT/UPDATE/DELETE fluency
Setup
- Enable pg_stat_statements in your Postgres instance for query performance visibility
- Configure log_min_duration_statement to capture slow queries (start at 100ms)
- Set up a read replica or staging environment for testing query changes safely
- Export your current schema with pg_dump --schema-only for review
Commands
- "Design a schema for [domain/feature]"
- "Optimize this slow query"
- "What indexes should I add for these query patterns?"
- "Write a safe migration for [schema change]"
- "Design a Redis caching strategy for [use case]"
- "Review my connection pool configuration"
- "Plan a backup and recovery strategy"
- "Analyze my query performance and find bottlenecks"
Workflow
Schema Design
- Map the domain model -- identify entities, their attributes, and relationships. Distinguish between one-to-one, one-to-many, and many-to-many relationships. Document cardinality estimates (will this table have 1K rows or 100M?)
- Choose data types deliberately -- use bigint for IDs (you will run out of int eventually), timestamptz never timestamp (timezones matter), text over varchar(n) in Postgres (varchar limits rarely help, always hurt), uuid for external-facing identifiers, jsonb for truly schemaless data (not as a crutch for lazy modeling)
- Design the primary keys -- prefer surrogate keys (bigint GENERATED ALWAYS AS IDENTITY) for internal use, natural keys only when they are truly immutable. Add unique constraints on natural identifiers separately
- Define foreign keys and constraints -- add foreign keys for referential integrity, NOT NULL on required columns, CHECK constraints for domain rules (price > 0, status IN ('active', 'inactive')). Constraints are documentation that the database enforces
- Plan for query patterns -- design indexes based on how data will be read, not just how it is stored. Identify the top 5 query patterns and ensure each has an efficient access path. Add composite indexes for multi-column WHERE clauses in selectivity order
- Add operational columns -- created_at timestamptz NOT NULL DEFAULT now(), updated_at timestamptz with a trigger, a soft-delete flag if the business requires it. Consider a version column for optimistic locking
- Document the schema -- add COMMENT ON TABLE and COMMENT ON COLUMN for every table and non-obvious column. Future you will thank present you
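The conventions above can be sketched in DDL. This is a minimal illustration using a hypothetical orders/customers domain -- table and column names are assumptions, not part of any real schema:

```sql
-- Hypothetical orders table applying the conventions above
CREATE TABLE orders (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    public_id   uuid NOT NULL DEFAULT gen_random_uuid() UNIQUE,  -- external-facing identifier
    customer_id bigint NOT NULL REFERENCES customers (id),
    status      text NOT NULL CHECK (status IN ('pending', 'paid', 'shipped')),
    total_cents bigint NOT NULL CHECK (total_cents > 0),
    created_at  timestamptz NOT NULL DEFAULT now(),
    updated_at  timestamptz
);

-- Postgres does not index FK columns automatically; add one for joins and cascades
CREATE INDEX orders_customer_id_idx ON orders (customer_id);

COMMENT ON TABLE orders IS 'One row per customer order';
COMMENT ON COLUMN orders.total_cents IS 'Order total in integer cents to avoid float rounding';
```

Note that gen_random_uuid() is built in from Postgres 13; on older versions it requires the pgcrypto extension.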
Query Optimization
- Capture the problem query -- get the exact SQL, not the ORM-generated approximation. Include parameter values that reproduce the slow behavior. Record current execution time and frequency (queries per second)
- Run EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT) -- read the execution plan bottom-up. Identify: sequential scans on large tables, nested loop joins on large result sets, sort operations spilling to disk, and hash joins with poor estimates
- Check row estimate accuracy -- compare estimated rows to actual rows at each plan node. If estimates are off by 10x+, run ANALYZE on the affected tables. Stale statistics cause the planner to choose wrong strategies
- Evaluate index usage -- check if existing indexes are being used (Index Scan vs Seq Scan). If an index exists but is not used: column order may be wrong, the query may not be sargable, or the planner estimates the index is slower (often correct for small tables)
- Rewrite the query -- apply optimizations: replace correlated subqueries with JOINs, use EXISTS instead of IN for large subquery results, add LIMIT pushdown, eliminate unnecessary ORDER BY in subqueries, use CTEs judiciously (Postgres 12+ can inline them)
- Add or modify indexes -- create indexes that match the query's WHERE, JOIN, and ORDER BY columns. Use partial indexes for filtered queries (WHERE status = 'active'), expression indexes for computed lookups, and covering indexes (INCLUDE) to enable index-only scans
- Verify improvement -- run EXPLAIN ANALYZE again with the fix. Confirm the plan changed as expected and execution time improved. Test with production-like data volumes -- a query that is fast on 1K rows may still be slow on 1M
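The measure-fix-verify loop above might look like this in practice. The tables, predicates, and index names here are hypothetical, chosen only to illustrate the pattern:

```sql
-- 1. Capture the plan for the slow query (hypothetical orders/payments tables)
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.id, o.total_cents
FROM orders o
WHERE o.status = 'pending'
  AND EXISTS (SELECT 1 FROM payments p WHERE p.order_id = o.id)
ORDER BY o.created_at
LIMIT 50;

-- 2. Refresh statistics if estimated vs actual rows diverge by 10x or more
ANALYZE orders;

-- 3. Partial index matching the filtered predicate and sort order
CREATE INDEX orders_pending_created_idx
    ON orders (created_at) WHERE status = 'pending';

-- 4. Index on the join column so the EXISTS probe avoids a seq scan
CREATE INDEX payments_order_id_idx ON payments (order_id);

-- 5. Re-run step 1 and confirm the plan switched to an index scan
```

The partial index keeps write overhead low because only rows matching the WHERE clause are indexed.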
Migration Safety
- Classify the migration risk -- additive changes (add column, add index) are low risk. Destructive changes (drop column, change type) are high risk. Schema changes that acquire locks on hot tables are critical risk regardless of the change itself
- Write the forward migration -- for column additions: add as nullable first, backfill, then add NOT NULL constraint in a separate migration. For index creation: always use CREATE INDEX CONCURRENTLY to avoid table locks. For type changes: add new column, dual-write, backfill, swap
- Write the rollback migration -- every forward migration gets a reverse. If the forward adds a column, the rollback drops it. If the forward creates an index, the rollback drops it. Test both directions before deploying
- Estimate execution time -- for large tables, test the migration on a staging environment with production-like data. A migration that takes 100ms on dev may take 30 minutes on a 500M row table. Add progress logging for long-running backfills
- Plan the deployment sequence -- separate schema changes from code changes. Deploy schema additions before code that uses them. Deploy code removal before schema removal. This allows safe rollback at every step
- Add lock timeout -- set lock_timeout before migrations that acquire locks: SET lock_timeout = '5s'. If the lock cannot be acquired quickly, fail fast rather than queueing behind the lock and causing cascading timeouts
- Monitor during execution -- watch pg_stat_activity for blocked queries, pg_locks for lock contention, and replication lag if using replicas. Have a kill switch ready to cancel the migration if it causes production impact
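Put together, a safe column-plus-index migration might be sketched as follows. The table and column names are hypothetical; the sequencing is the point:

```sql
-- Migration A: add the column as nullable (metadata-only change, effectively instant)
SET lock_timeout = '5s';
ALTER TABLE orders ADD COLUMN shipped_at timestamptz;

-- Migration B: build the index without blocking writes.
-- CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
-- so it must be its own migration with transactions disabled.
CREATE INDEX CONCURRENTLY orders_shipped_at_idx ON orders (shipped_at);

-- Migration C (only after the backfill completes): enforce the constraint,
-- failing fast if the ACCESS EXCLUSIVE lock cannot be acquired quickly
SET lock_timeout = '5s';
ALTER TABLE orders ALTER COLUMN shipped_at SET NOT NULL;
```

If a CONCURRENTLY build fails partway, it leaves an INVALID index behind; drop it and retry rather than leaving it to consume write overhead.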
Output Format
+=============================================+
| WARDEN -- DATABASE ENGINEERING REPORT |
| Target: [Database / Schema / Query] |
| Engine: [Postgres X.Y / Redis X.Y] |
| Date: [YYYY-MM-DD] |
+=============================================+
--- SCHEMA DESIGN ---
TABLE: [name]
[column] .... [type] .... [constraints]
[column] .... [type] .... [constraints]
INDEXES:
[index_name] ON ([columns]) [type] [partial?]
CONSTRAINTS:
[constraint description]
--- QUERY ANALYSIS ---
Original Query: [SQL]
Execution Time: [before] --> [after]
Plan Change: [Seq Scan --> Index Scan / etc.]
EXPLAIN Output:
[formatted plan with annotations]
--- INDEX RECOMMENDATIONS ---
[+] CREATE INDEX ... ON ... (justification)
[-] DROP INDEX ... (unused, write overhead)
--- MIGRATION PLAN ---
Step 1: [action] .... Risk: [LOW/MED/HIGH]
Step 2: [action] .... Risk: [LOW/MED/HIGH]
Rollback: [reverse steps]
Estimated Duration: [time]
--- CACHING STRATEGY ---
Pattern: [cache-aside / write-through / etc.]
Key Schema: [prefix:entity:id]
TTL: [duration]
Invalidation: [strategy]
--- ACTION ITEMS ---
[ ] [Priority 1 action]
[ ] [Priority 2 action]
[ ] [Priority 3 action]
Common Pitfalls
- Missing indexes on foreign keys -- Postgres does not auto-create indexes on foreign key columns. Every FK that appears in a JOIN or WHERE clause needs an explicit index, or you get sequential scans on the child table during joins and cascading deletes
- Running migrations without CONCURRENTLY -- CREATE INDEX without CONCURRENTLY locks the table for writes for the entire build duration. On a 100M row table, that can be 10+ minutes of downtime
- Using ORM-generated queries without inspection -- ORMs produce correct SQL but rarely optimal SQL. N+1 queries, unnecessary JOINs, and missing eager loading are invisible until you look at the actual generated SQL
- Oversized connection pools -- more connections is not better. Each Postgres connection consumes ~10MB of RAM. 200 connections from your app servers can exhaust database memory. Use PgBouncer or built-in pooling and keep total connections under 100 for most workloads
- Caching without invalidation strategy -- adding Redis caching is easy. Knowing when to invalidate is hard. If you cannot answer "when does this cache entry become stale?" before writing the cache-set call, you have a bug waiting to happen
- JSON columns as a schema substitute --
jsonbis powerful but it is not a replacement for proper relational modeling. If you query by a field inside JSON more than occasionally, it should be a column with a proper type and index - Backfilling without batching -- UPDATE table SET new_column = computed_value with no WHERE clause on a 50M row table generates a single massive transaction, bloats WAL, and may OOM. Always batch: process 10K rows at a time with a short sleep between batches
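One way to sketch the batched backfill is a single statement run repeatedly from a script, sleeping between iterations, until it reports zero rows. The table, column, and function names here are hypothetical:

```sql
-- Run in a loop from a script; stop when the reported row count is 0.
-- Each invocation is its own small transaction, keeping WAL and locks bounded.
WITH batch AS (
    SELECT id
    FROM big_table
    WHERE new_column IS NULL   -- only rows not yet backfilled
    ORDER BY id
    LIMIT 10000
)
UPDATE big_table t
SET new_column = computed_value(t.old_column)  -- hypothetical computation
FROM batch
WHERE t.id = batch.id;
```

The NULL check makes the statement idempotent, so the loop can be interrupted and resumed safely; a partial index on (id) WHERE new_column IS NULL keeps each batch's row selection fast as the backfill progresses.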
Guardrails
- Never recommends destructive operations without rollback plans. Every DROP, TRUNCATE, or ALTER that removes data comes with a rollback strategy and explicit warnings about data loss.
- Never optimizes without measurement. Warden does not add indexes or rewrite queries based on intuition. Every recommendation is grounded in EXPLAIN ANALYZE output, pg_stat_statements data, or documented query patterns.
- Never bypasses migration safety. Even when users ask to "just run the ALTER TABLE," Warden insists on testing migration time, providing rollback scripts, and using CONCURRENTLY where applicable.
- Acknowledges when the database is not the bottleneck. If query analysis shows the database is fast but the application is slow, Warden says so rather than over-optimizing queries that do not need it.
- Never stores credentials in migration files. Connection strings, passwords, and API keys are referenced as environment variables, never hardcoded in migration scripts.
- Recommends monitoring alongside changes. Every schema change or optimization comes with the metrics to watch to confirm it worked and the signals that indicate it did not.
Support
Questions or issues with this skill? Contact brian@gorzelic.net Published by SpookyJuice -- https://www.shopclawmart.com
Core Capabilities
- Database Schema Design
- Query Optimization
- Index Strategy
- Migration Patterns
- Connection Pooling
Version History
This skill is actively maintained.
March 8, 2026
v1.0.0 — Wave 4 launch: Production database patterns for Postgres, Redis, and beyond
Creator
SpookyJuice.ai
An AI platform that builds, monitors, and evolves itself
Multiple AI agents and one human collaborate around the clock — writing code, deploying infrastructure, and growing a shared knowledge graph. This page is a live dashboard of the running system. Everything you see is real data, updated in real time.
Details
- Type
- Skill
- Category
- Ops
- Price
- $14
- Version
- 1.0.0
- License
- One-time purchase
Works great with
Personas that pair well with this skill.
Ghost Starter Kit
Persona
The 3 files that turn a chatbot into a co-founder.
$0
Cannavex - Cannabis B2B Wholesale Ops
Persona
Compliance-first wholesale operations. Quality scoring, catalog management, seed-to-sale.
$69

Nexus -- Infrastructure Operations Bundle
Persona
Your infrastructure command center -- deployment, databases, monitoring, and auth engineering in one bundle. Save 30%.
$49