AOMS - Always-On Memory Service
Skill
Persistent memory for agents that actually improves over time
About
AOMS is a production memory layer for AI agents. It runs as a local service (FastAPI + systemd), stores episodic/semantic/procedural memory in JSONL, and adds a Cortex engine for progressive disclosure (L0/L1/L2) to slash token costs by 95-99%. Memories are weighted over time so useful experiences surface first and noise decays. Drop it into any agent stack in minutes via REST or CLI. Works with OpenClaw out of the box.
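The JSONL storage and L0/L1/L2 progressive disclosure described above can be sketched in a few lines. The record schema and field names below (`l0`/`l1`/`l2`, `kind`) are illustrative assumptions, not AOMS's actual format: the idea is that an agent pays for a one-line L0 summary by default and only expands to the full L2 record when needed.

```python
import json

# Hypothetical memory record; field names are assumptions, not AOMS's schema.
memory = {
    "id": "m-001",
    "kind": "episodic",
    "l0": "Fixed flaky deploy by pinning uvicorn version.",  # one-line summary
    "l1": "Deploy failed intermittently; traced to an unpinned uvicorn "
          "dependency. Pinned it in requirements and redeployed.",
    "l2": "Full transcript / raw event detail goes here ... " + "x" * 4000,
}

def disclose(mem: dict, level: int) -> str:
    """Return the representation for the requested disclosure level (0-2)."""
    return mem[f"l{level}"]

# A JSONL store keeps one serialized record per line.
line = json.dumps(memory)

# Token cost scales roughly with text length; L0 is a tiny fraction of L2.
reduction = 1 - len(disclose(memory, 0)) / len(disclose(memory, 2))
print(f"L0 vs L2 size reduction: {reduction:.0%}")
```

In practice the reduction depends on how verbose the full record is; the 95-99% figure applies when L2 holds long transcripts or logs.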
Features:
- 4-tier memory (working/episodic/semantic/procedural)
- Progressive disclosure (L0/L1/L2) - 95-99% token reduction
- Weighted retrieval with reinforcement learning
- REST API + CLI + Docker
- OpenClaw native integration
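"Weighted retrieval with reinforcement learning" can be illustrated with a toy scorer: each memory's rank combines a learned weight with exponential recency decay, and memories that get surfaced have their weight reinforced. The half-life, update rule, and field names here are assumptions for illustration, not AOMS internals.

```python
import math

# Illustrative parameters, not AOMS's actual tuning.
HALF_LIFE_DAYS = 30.0
DECAY = math.log(2) / HALF_LIFE_DAYS

def score(weight: float, age_days: float) -> float:
    """Rank score: learned weight discounted by exponential recency decay."""
    return weight * math.exp(-DECAY * age_days)

def reinforce(weight: float, reward: float = 1.0, lr: float = 0.2) -> float:
    """Simple reinforcement update: nudge the weight toward the observed reward."""
    return weight + lr * (reward - weight)

memories = [
    {"id": "a", "weight": 0.9, "age_days": 60.0},  # useful but stale
    {"id": "b", "weight": 0.5, "age_days": 1.0},   # recent, middling
    {"id": "c", "weight": 0.2, "age_days": 2.0},   # recent noise
]

ranked = sorted(memories, key=lambda m: score(m["weight"], m["age_days"]),
                reverse=True)
top = ranked[0]
top["weight"] = reinforce(top["weight"])  # surfaced memory gets reinforced
```

With these numbers the recent mid-weight memory outranks the stale high-weight one, and low-weight noise keeps decaying toward the bottom.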
GitHub: https://github.com/dhawalc/cortex-mem
Version History
This skill is actively maintained.
February 27, 2026
v1.0.2: Comprehensive test suite (28 tests), ChromaDB fixes, improved error handling
February 27, 2026
Initial release
Details
- Type: Skill
- Category: Engineering
- Price: $39
- Version: 2
- License: One-time purchase