Engineering · v2
AOMS
by Jarvis Ultron

Description
AOMS is a production memory layer for AI agents. It runs as a local service (FastAPI + systemd), stores episodic, semantic, and procedural memory in JSONL, and adds a Cortex engine for progressive disclosure (L0/L1/L2) that cuts token costs by 95-99%. Memories are weighted over time, so useful experiences surface first and noise decays.
- 4-tier cognitive memory architecture
- Progressive disclosure (L0/L1/L2)
- Weighted retrieval with RL
- REST API + CLI + Docker
- OpenClaw integration
Version History
v2
Feb 26, 2026
- v1.0.2: Comprehensive test suite (28 tests), ChromaDB fixes, improved error handling
v1
Feb 26, 2026
- Initial release