
Redis -- Caching & Data Integration Expert
Your Redis expert that builds caching layers, manages pub/sub, and optimizes data access patterns.
About
```yaml
name: redis
description: >
  Implement Redis caching, pub/sub, session management, rate limiting, and streams.
  USE WHEN: User needs to implement Redis caching, pub/sub, sessions, rate limiting,
  or stream processing.
  DON'T USE WHEN: User needs persistent relational data. Use a database skill.
  Use Vector for vector search.
  OUTPUTS: Cache configurations, pub/sub architectures, session managers,
  rate limiters, stream processors, eviction strategies.
version: 1.0.0
author: SpookyJuice
tags: [redis, caching, pubsub, sessions, rate-limiting, streams]
price: 14
author_url: "https://www.shopclawmart.com"
support: "brian@gorzelic.net"
license: proprietary
osps_version: "0.1"
content_hash: "sha256:2bc31a84bcb7b46ceca4722bcd794df7c703805317911c45f7137d6dc4768110"
```
# Redis
Version: 1.0.0
Price: $14
Type: Skill
Description
Production-grade Redis patterns for the infrastructure that keeps your application fast, coordinated, and protected. Redis looks simple until you hit thundering herd problems after a cache flush, watch pub/sub silently drop messages during a reconnect, or discover your rate limiter drifts under clock skew. The docs show standalone commands — this skill gives you the architectures, failure modes, and operational patterns that hold up when traffic spikes and nodes go down.
Prerequisites
- Redis server 7.0+ (local, Docker, or managed — AWS ElastiCache, Redis Cloud, Upstash)
- `redis-cli` installed for debugging and inspection
- Connection string or host/port/password for your Redis instance
Setup
- Copy `SKILL.md` into your OpenClaw skills directory
- Set environment variables:
  ```shell
  export REDIS_URL="redis://:password@localhost:6379/0"
  export REDIS_PASSWORD="your-password"
  ```
- Reload OpenClaw
Commands
- "Implement a cache-aside strategy for [data type] with invalidation"
- "Set up pub/sub messaging for [event type] with reliability guarantees"
- "Build session management for [framework] with sliding expiry"
- "Implement rate limiting for [API endpoint] using sliding window"
- "Design a Redis Streams consumer group for [event processing pipeline]"
- "Configure eviction policies for [memory budget] and [access pattern]"
- "Build a distributed lock for [critical section] with fencing tokens"
Workflow
Caching Strategy Implementation
- Pattern selection — choose the caching pattern that matches your consistency requirements. Cache-aside (lazy loading) is the default: read from cache, miss falls through to the database, then populate. Write-through adds cache writes on every database write for stronger consistency at the cost of write latency.
- Key design — establish a naming convention before writing a single line of code. Use colon-delimited namespaces like `user:1234:profile` with a project prefix. Consistent key naming prevents collisions, enables pattern-based invalidation, and makes `SCAN`-based debugging possible.
- TTL policy — set TTLs on every key. No exceptions. Keys without TTLs accumulate until you hit `maxmemory` and the eviction policy starts dropping things you didn't expect. Differentiate TTLs by data volatility: user sessions get minutes, product catalogs get hours, static config gets days.
- Invalidation strategy — choose between TTL-based expiry (eventual consistency, simplest), explicit deletion on write (strong consistency, requires disciplined cache-busting), and versioned keys (append a version counter to the key, increment on write, no delete needed).
- Thundering herd protection — when a popular key expires, hundreds of requests hit the database simultaneously. Implement request coalescing: the first cache miss acquires a short-lived lock (via `SET NX EX`), fetches from the database, and populates the cache. All other requests wait or serve stale data.
- Serialization — use MessagePack or Protocol Buffers for cache values, not JSON. JSON wastes 30-50% of memory on field names and formatting. If you must use JSON, compress with gzip for values over 1KB.
- Monitoring — track hit rate (`keyspace_hits / (keyspace_hits + keyspace_misses)`), memory usage, and eviction count. A hit rate below 80% means your TTLs are too aggressive or your key design doesn't match access patterns.
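The cache-aside flow with request coalescing described above can be sketched as follows. This is a minimal illustration, not the skill's shipped implementation: an in-memory dict stands in for Redis, and the `_locks` dict simulates the short-lived `SET NX EX` lock. The names `CacheAside` and `load_fn` are hypothetical.

```python
import time

class CacheAside:
    """Cache-aside sketch with request coalescing (dict stands in for Redis).

    In production the same logic maps onto GET, SET key value EX ttl, and
    SET lock NX EX for the coalescing lock.
    """
    def __init__(self, load_fn, ttl=60, lock_ttl=5):
        self.load_fn = load_fn          # falls through to the database on a miss
        self.ttl = ttl
        self.lock_ttl = lock_ttl
        self._store = {}                # key -> (value, expires_at)
        self._locks = {}                # key -> lock expiry (simulates SET NX EX)
        self.db_hits = 0                # counts database fetches for observability

    def get(self, key):
        now = time.time()
        entry = self._store.get(key)
        if entry and entry[1] > now:    # cache hit, still fresh
            return entry[0]
        # Miss: only the first caller takes the lock and hits the database.
        if self._locks.get(key, 0) < now:
            self._locks[key] = now + self.lock_ttl
            self.db_hits += 1
            value = self.load_fn(key)
            self._store[key] = (value, now + self.ttl)
            self._locks.pop(key, None)  # release the lock after populating
            return value
        # Other callers serve stale data if any, else back off briefly and retry.
        if entry:
            return entry[0]
        time.sleep(0.01)
        return self.get(key)

cache = CacheAside(load_fn=lambda k: f"row-for-{k}", ttl=60)
first = cache.get("user:1:profile")     # miss: hits the "database" once
second = cache.get("user:1:profile")    # hit: served from cache
```

Repeated reads of the same key cost exactly one database fetch until the TTL expires, which is the property the coalescing lock exists to guarantee.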
Pub/Sub Architecture
- Channel design — use hierarchical channel names like `orders.created`, `orders.updated` that mirror your domain events. Pattern subscriptions (`PSUBSCRIBE orders.*`) let consumers listen to entire event families without knowing every channel name upfront.
- Connection management — pub/sub connections are dedicated. A subscribed connection cannot execute other commands. Maintain separate connection pools for pub/sub and regular operations. Size the subscriber pool based on the number of distinct subscription patterns, not request volume.
- Reliability gap — standard pub/sub is fire-and-forget. If a subscriber is disconnected when a message publishes, that message is gone. For at-least-once delivery, use Redis Streams instead. Reserve pub/sub for cases where occasional message loss is acceptable (live dashboards, notifications, cache invalidation signals).
- Reconnection handling — implement automatic reconnection with exponential backoff. After reconnecting, resubscribe to all channels. Design your application to handle the gap: either accept the lost messages or use a separate mechanism (polling, Streams) to catch up on missed events.
- Message format — publish structured payloads with an event type, timestamp, correlation ID, and version field. The version field lets consumers ignore messages from schema versions they don't understand, enabling backward-compatible evolution.
- Backpressure — subscribers that process messages slower than they arrive will have their output buffer grow until Redis kills the connection. Set `client-output-buffer-limit pubsub` to cap memory usage and monitor slow subscribers via `CLIENT LIST`.
- Dead letter handling — wrap message processing in error handling that routes failed messages to a Redis List (acting as a dead-letter queue). Process the DLQ with a separate worker that retries with backoff or alerts on repeated failures.
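The message format described above (event type, timestamp, correlation ID, version field) can be sketched as a small envelope helper. This is an illustrative sketch, not the skill's wire format: `SCHEMA_VERSION`, `make_envelope`, and `handle` are hypothetical names, and the JSON string produced would be what gets passed to a publish call.

```python
import json
import time
import uuid

SCHEMA_VERSION = 2  # bump whenever the payload shape changes (illustrative)

def make_envelope(event_type, payload, version=SCHEMA_VERSION):
    """Build a structured pub/sub payload: type, timestamp, correlation ID,
    and schema version, serialized to a JSON string for publishing."""
    return json.dumps({
        "type": event_type,
        "ts": time.time(),
        "correlation_id": str(uuid.uuid4()),
        "version": version,
        "payload": payload,
    })

def handle(raw, max_version=SCHEMA_VERSION):
    """Consumer side: ignore messages from schema versions we don't understand,
    which is what enables backward-compatible evolution."""
    msg = json.loads(raw)
    if msg["version"] > max_version:
        return None                    # a newer producer is running; skip
    return msg["type"], msg["payload"]

handled = handle(make_envelope("orders.created", {"order_id": 42}))
skipped = handle(make_envelope("orders.created", {}, version=SCHEMA_VERSION + 1))
```

The version check is deliberately a skip, not an error: consumers keep draining the channel during a rolling deploy instead of crashing on unfamiliar payloads.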
Session Management
- Storage layout — store sessions as Redis Hashes keyed by session ID (`session:{uuid}`). Hashes let you read or update individual fields without deserializing the entire session. Store: user ID, creation time, last access time, and any session-scoped state.
- Session ID generation — use cryptographically random IDs (128+ bits of entropy). Never derive session IDs from user data. Set the session cookie with `HttpOnly`, `Secure`, `SameSite=Strict`, and a reasonable `Max-Age` that matches your Redis TTL.
- Expiry strategy — use `EXPIRE` for absolute timeouts (session dies after 24 hours regardless of activity) and implement sliding windows by resetting the TTL on every authenticated request. Sliding windows keep active users logged in without creating immortal sessions.
- Concurrency — concurrent requests from the same user (multiple browser tabs, mobile + web) can race on session updates. Use `WATCH`/`MULTI` transactions or Lua scripts to atomically read-modify-write session data. Without this, the last write wins and you lose state.
- Serialization — store session fields as simple strings or numbers in the Hash. Avoid nesting serialized objects inside Hash fields — it defeats the purpose of using a Hash. If you need complex session state, flatten it or use a separate key.
- Logout and invalidation — on logout, delete the session key immediately with `DEL`. For "log out all devices," maintain a set of session IDs per user (`user:{id}:sessions`) and delete all of them. Invalidation must be instant, not TTL-dependent.
- Scaling — sessions are naturally shardable by session ID. In a Redis Cluster, all session operations for a given session hit the same shard. Avoid cross-key operations that span sessions — they'll fail in cluster mode unless you use hash tags.
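The storage layout, sliding expiry, and instant-logout rules above can be sketched together. A dict stands in for Redis here, so this is a shape illustration only: `create` corresponds to `HSET session:{id} ...` plus `EXPIRE`, `touch` to re-running `EXPIRE` on each authenticated request, and `destroy` to `DEL`. The class and method names are hypothetical.

```python
import secrets
import time

class SessionStore:
    """Session-management sketch with a dict standing in for Redis Hashes."""
    def __init__(self, idle_ttl=1800):
        self.idle_ttl = idle_ttl
        self._store = {}               # "session:{id}" -> (fields, expires_at)

    def create(self, user_id):
        sid = secrets.token_urlsafe(32)        # 256 bits of entropy
        fields = {"user_id": user_id, "created_at": time.time()}
        self._store[f"session:{sid}"] = (fields, time.time() + self.idle_ttl)
        return sid

    def touch(self, sid):
        """Sliding expiry: reset the TTL on every authenticated request."""
        key = f"session:{sid}"
        entry = self._store.get(key)
        if not entry or entry[1] < time.time():
            return None                         # expired or never existed
        self._store[key] = (entry[0], time.time() + self.idle_ttl)
        return entry[0]

    def destroy(self, sid):
        """Logout: delete immediately (DEL), never wait for the TTL."""
        self._store.pop(f"session:{sid}", None)

store = SessionStore()
sid = store.create(user_id=7)
alive = store.touch(sid)    # returns the session fields and slides the TTL
store.destroy(sid)
gone = store.touch(sid)     # None: invalidation is instant, not TTL-dependent
```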
Rate Limiting
- Algorithm selection — fixed window (simplest, allows burst at window boundaries), sliding window log (most accurate, highest memory cost), sliding window counter (good accuracy, low memory), or token bucket (smooth rate, supports bursts). Choose sliding window counter as the default — it balances accuracy and resource usage.
- Sliding window counter — use two fixed windows (current and previous) and weight the previous window's count by the overlap percentage. Implement with a Lua script that atomically increments the current window counter (`INCR`), reads the previous window counter, calculates the weighted sum, and returns allow/deny.
- Key design — rate limit keys encode the limiter identity and the time window: `ratelimit:{endpoint}:{client_ip}:{window_timestamp}`. Set TTL to twice the window size so the previous window is always available for the sliding calculation.
- Response headers — return standard rate limit headers on every response: `X-RateLimit-Limit` (max requests), `X-RateLimit-Remaining` (requests left), `X-RateLimit-Reset` (window reset timestamp, Unix seconds). Clients depend on these to self-throttle.
- Distributed consistency — in a multi-node Redis setup, rate limiting must hit the same shard for a given client. Use hash tags in cluster mode (`{ratelimit:api}:client_ip:window`) to colocate related keys. For cross-region limiting, accept approximate enforcement or use a centralized Redis instance.
- Graceful degradation — if Redis is unreachable, fail open (allow requests) rather than fail closed (reject everything). A brief rate-limiting outage is better than a total service outage. Log the failure and alert immediately.
- Tiered limits — implement multiple rate limit tiers (per-IP, per-user, per-API-key, global) evaluated in order. A request must pass all tiers. Store tier configurations in Redis Hashes for runtime updates without redeployment.
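The weighting rule in the sliding-window-counter step above can be sketched in plain Python. A dict stands in for the per-window Redis counters, so the atomicity that the Lua script provides is not modeled here; `SlidingWindowCounter` and its injectable `clock` are hypothetical names for illustration.

```python
import time

class SlidingWindowCounter:
    """Sliding-window-counter sketch: current window count plus the previous
    window's count weighted by how much of it still overlaps the window."""
    def __init__(self, limit, window_secs, clock=time.time):
        self.limit = limit
        self.window = window_secs
        self.clock = clock             # injectable clock for deterministic tests
        self._counts = {}              # window_start_timestamp -> request count

    def allow(self, now=None):
        now = self.clock() if now is None else now
        curr_start = int(now // self.window) * self.window
        prev_start = curr_start - self.window
        # Weight the previous window by its remaining overlap fraction.
        overlap = 1.0 - (now - curr_start) / self.window
        estimate = (self._counts.get(curr_start, 0)
                    + overlap * self._counts.get(prev_start, 0))
        if estimate >= self.limit:
            return False               # deny: weighted sum at or over the limit
        self._counts[curr_start] = self._counts.get(curr_start, 0) + 1
        return True

rl = SlidingWindowCounter(limit=3, window_secs=60)
results = [rl.allow(now=100.0) for _ in range(5)]   # 4th and 5th denied
```

In production the increment, the read of the previous window, and the weighted comparison all happen inside one Lua script so concurrent clients can't race past the limit.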
Output Format
REDIS -- IMPLEMENTATION GUIDE
Pattern: [Cache/PubSub/Sessions/RateLimiting/Streams]
Redis Version: [7.x]
Date: [YYYY-MM-DD]
=== ARCHITECTURE ===
[Data flow: application -> Redis -> consumers/storage]
=== KEY DESIGN ===
| Pattern | Example | TTL | Eviction |
|---------|---------|-----|----------|
| [namespace:id:field] | [concrete example] | [duration] | [policy] |
=== IMPLEMENTATION ===
[Code with inline comments explaining each decision]
=== MEMORY PROFILE ===
| Data | Key Count | Avg Size | Total | TTL |
|------|-----------|----------|-------|-----|
| [type] | [estimate] | [bytes] | [MB] | [duration] |
=== FAILURE MODES ===
- [What breaks and how the implementation handles it]
=== MONITORING ===
- [Key metrics to watch and alert thresholds]
Common Pitfalls
- Thundering herd after cache flush — flushing an entire cache (or losing a Redis node) causes every request to hit the database simultaneously. Use staggered TTLs (base TTL + random jitter) and request coalescing to prevent cascading failures.
- Memory exhaustion from missing TTLs — keys without TTLs accumulate forever. When `maxmemory` is hit, the eviction policy kicks in and may drop keys you need. Set TTLs on every key and monitor `evicted_keys` as an early warning.
- Pub/sub message loss on disconnect — standard pub/sub has no persistence. If a subscriber disconnects, messages published during the gap are lost permanently. Use Redis Streams with consumer groups for any workflow where message loss is unacceptable.
- Big keys blocking the event loop — a single Hash with 1M fields or a List with 10M entries blocks Redis during serialization, deletion, and cluster migration. Keep individual keys under 1MB. Use `SCAN`-family commands instead of `KEYS`, `SMEMBERS`, or `HGETALL` on large collections.
- Key naming without namespaces — flat key names like `user_123` collide across features and make pattern-based operations impossible. Adopt colon-delimited namespaces from day one: `service:entity:id:field`.
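The staggered-TTL mitigation from the thundering-herd pitfall above is small enough to sketch directly. The helper name and the `jitter_frac` knob are illustrative, not from the skill text:

```python
import random

def jittered_ttl(base_ttl, jitter_frac=0.1):
    """Base TTL plus random jitter, so keys populated together (e.g. after a
    cache flush or node loss) don't all expire in the same instant."""
    jitter = random.uniform(0, base_ttl * jitter_frac)
    return int(base_ttl + jitter)

# Usage sketch: pass the result wherever you set an expiry,
# e.g. r.set(key, value, ex=jittered_ttl(3600)) with redis-py.
ttls = [jittered_ttl(3600) for _ in range(100)]
```

With a 10% jitter fraction, a one-hour TTL spreads expirations across a six-minute band, which is usually enough to turn a refill spike into a gentle slope.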
Guardrails
- Never uses KEYS in production. `KEYS *` blocks the Redis event loop and freezes all clients. Uses `SCAN` with cursor-based iteration for any key enumeration.
- Always sets TTLs. Every key created includes an expiration. Permanent keys require explicit justification and monitoring for growth.
- Lua scripts for atomicity. Any operation requiring read-then-write consistency uses a Lua script or `MULTI`/`EXEC` transaction. Never relies on sequential commands without locking.
- Connection pooling is mandatory. Never creates a new connection per request. Uses a connection pool sized to the application's concurrency level with health checks and reconnection logic.
- Credentials stay in environment variables. Redis passwords and connection strings are read from environment variables or secret managers. Never hardcoded in application code or configuration files checked into source control.
Support
Questions or issues with this skill? Contact brian@gorzelic.net. Published by SpookyJuice — https://www.shopclawmart.com
Version History
This skill is actively maintained.
March 8, 2026
v2.1.0 — improved frontmatter descriptions for better OpenClaw display
February 28, 2026
Initial release
Works With
Works with OpenClaw, Claude Projects, Custom GPTs, Cursor and other instruction-friendly AI tools.