v1.0 — Pure Rust — Patent Pending

Cognitive memory
for AI agents.

No LLM required. No embeddings. No cloud. pip install aura-memory, 2.7 MB binary, works offline.

python3
from aura import Aura, Level
brain = Aura("./my_agent")
# Store memories at different importance levels
brain.store("User prefers dark mode", level=Level.Identity)
brain.store("Deploy to staging first", level=Level.Decisions)
# Recall with RRF Fusion — <1ms, zero LLM calls
context = brain.recall("user preferences", token_budget=2000)
# Inject into any LLM system prompt — done.
0.09ms store latency
0.74ms recall latency
2.7MB binary size
$0 cost per operation
Patented Architecture

Autonomous cognitive memory

DNA Layering, Cognitive Crystallization, SDR Indexing, and Synaptic Synthesis — a unified architecture that learns, consolidates, and evolves without external model retraining.

Patent Pending

DNA Memory Layers

Hierarchical memory partitioning: Identity (user_core) for permanent immutable data, Wisdom (super_core) for synthesized abstractions, Context (general) for transient high-velocity data.

Patent Pending

Cognitive Crystallization

Autonomous promotion of memories from transient to permanent layers based on semantic intensity, temporal frequency, and cross-contextual relevance. Includes Flash-Crystallization for safety-critical triggers.

Patent Pending

SDR Indexing

Deterministic O(k) search via Sparse Distributed Representations with Tanimoto similarity. Bitwise operations on 256K-bit vectors. Sub-millisecond recall, zero garbage collection pauses.
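To make the arithmetic concrete, here is a minimal Python sketch of Tanimoto similarity over active-bit indices. The real index operates on packed 256K-bit vectors with bitwise AND/OR popcounts; plain Python sets give the same result and make the O(k) cost visible, where k is the number of active bits.

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity over active-bit indices.

    Equivalent to popcount(A & B) / popcount(A | B) on packed
    bit vectors; cost scales with the number of active bits.
    """
    if not a and not b:
        return 0.0
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

# Two SDRs sharing half their active bits
x = {1, 5, 9, 42}
y = {1, 5, 77, 300}
print(tanimoto(x, y))  # 2 / 6 ≈ 0.333
```

Because SDRs are sparse, only the handful of set bits ever participate in the comparison, which is why recall stays sub-millisecond regardless of vector width.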

Patent Pending

Synaptic Synthesis

Semantic deduplication via SDR resonance. High-resonance entries (Tanimoto score ≥ 0.75) are merged into dense super-synapses, eliminating redundancy without data loss.
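A toy sketch of the merge step, assuming a simplified synapse record (`bits`, `texts`, and `hits` are illustrative field names, not Aura's internal schema):

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    bits: set      # active SDR bit indices
    texts: list    # source records folded into this synapse
    hits: int = 1  # reinforcement count

def synthesize(a: Synapse, b: Synapse) -> Synapse:
    """Merge two high-resonance synapses into one super-synapse.

    Bits are OR-ed so either original still resonates with the
    result, and all source texts are kept: deduplication without
    data loss.
    """
    return Synapse(a.bits | b.bits, a.texts + b.texts, a.hits + b.hits)
```

The union keeps the merged synapse resonant with every query that matched either original, which is what lets the store shrink without forgetting.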

Encryption at Rest

ChaCha20-Poly1305 with Argon2id key derivation. Append-only binary storage ensures transactional data integrity and power-loss resilience across edge and cloud.

Trust & Provenance

Source authority scoring, provenance stamping, credibility tracking for 60+ domains. Auto-protect guards detect PII (phone, email, wallets, API keys) automatically.
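As an illustration of how such guards can work, here is a regex-based sketch; the patterns are hypothetical stand-ins, not Aura's actual detectors.

```python
import re

# Hypothetical guard patterns for a few PII categories.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def detect_pii(text: str) -> list:
    """Return the kinds of PII found in `text`."""
    return [kind for kind, pat in PII_PATTERNS.items() if pat.search(text)]

print(detect_pii("mail me at dev@example.com"))  # ['email']
```

A detector like this runs in microseconds, so every store can be screened without touching the latency budget.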

How It Works

Cognitive Crystallization Process

From input to permanent memory — no LLM calls, no embedding API, no cloud. Pure deterministic computation in Rust.

01

Input Encoding

Text is converted into a Sparse Distributed Representation (SDR) — a 256K-bit vector via xxHash3. Deterministic, no neural model needed.
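A runnable sketch of the idea, with the stdlib's `blake2b` standing in for xxHash3 and an assumed 4 active bits per token:

```python
import hashlib

SDR_BITS = 256 * 1024     # 256K-bit vector, as in the real index
ACTIVE_PER_TOKEN = 4      # assumption: a few active bits per token

def encode(text: str) -> set:
    """Deterministically map text to a set of active-bit indices.

    Aura uses xxHash3; stdlib blake2b stands in here so the sketch
    runs anywhere. Identical text always yields an identical SDR.
    """
    bits = set()
    for token in text.lower().split():
        for seed in range(ACTIVE_PER_TOKEN):
            h = hashlib.blake2b(token.encode(), digest_size=8,
                                salt=seed.to_bytes(8, "little"))
            bits.add(int.from_bytes(h.digest(), "little") % SDR_BITS)
    return bits

assert encode("dark mode") == encode("dark mode")  # deterministic
```

Texts sharing tokens share active bits, so overlap in meaning becomes overlap in bits, which is exactly what the resonance search exploits.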

02

Anchor Check

Flash-Crystallization scans for safety-critical, emotional, or identity triggers. If detected, the record is immediately committed to user_core.
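In spirit, the anchor check is a fast lexical scan before any similarity work. This sketch uses a tiny hypothetical trigger list; the real anchor set is internal to Aura.

```python
import re

# Hypothetical trigger lexicon covering safety, emotion, identity.
ANCHORS = re.compile(
    r"\b(allerg(?:y|ic|ies)|emergency|my name is|never|always)\b",
    re.IGNORECASE,
)

def anchor_check(text: str) -> bool:
    """True means: bypass normal routing, commit straight to user_core."""
    return ANCHORS.search(text) is not None
```

Running this before the resonance search guarantees that safety-critical facts are never left to decay in the transient layer.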

03

Resonance Search

Tanimoto similarity is computed against existing synapses via bitwise operations. O(k) complexity where k = active bits.

04

Store or Merge

A Tanimoto score ≥ 0.75 triggers Synaptic Synthesis (merge into a super-synapse); scores between 0.2 and 0.75 update the existing synapse; below 0.2, a new synapse is created in the general layer.
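The routing logic is a pure threshold cascade; a sketch using the documented cut-offs, treating the 0.75 boundary as inclusive per the Synaptic Synthesis description:

```python
def route(best_score: float) -> str:
    """Map the best-match Tanimoto score to a write action."""
    if best_score >= 0.75:
        return "merge"   # Synaptic Synthesis into a super-synapse
    if best_score > 0.2:
        return "update"  # reinforce the existing synapse
    return "create"      # new synapse in the general layer

print([route(s) for s in (0.9, 0.5, 0.1)])  # ['merge', 'update', 'create']
```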

05

Crystallization

Background process autonomously promotes memories from general to super_core to user_core based on semantic intensity, access frequency, and cross-contextual relevance.
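One way to picture the promotion rule, with purely illustrative weights and thresholds (Aura's actual scoring is internal):

```python
import math

def crystallization_score(intensity: float, accesses: int,
                          contexts: int) -> float:
    """Combine the three documented signals into one promotion score.

    Weights are illustrative; log1p gives diminishing returns on the
    raw access and context counts.
    """
    return (0.5 * intensity
            + 0.3 * math.log1p(accesses)
            + 0.2 * math.log1p(contexts))

def promote(layer: str, score: float) -> str:
    """Move a record one layer up once its score crosses a threshold."""
    if layer == "general" and score > 1.0:
        return "super_core"
    if layer == "super_core" and score > 2.0:
        return "user_core"
    return layer
```

Because scoring uses only local counters, the background pass needs no model call: promotion is just arithmetic over metadata that storage and recall already maintain.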

06

Kinetic Decay

Low-stability records are pruned via entropy-weighted decay. Each DNA layer has its own retention rate. Power-loss resilient via append-only binary storage.
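A sketch of one decay cycle. Retention rates follow the DNA Memory Layers section (super_core taken as 0.92, inside its stated 0.90 – 0.95 band), and the pruning floor is an assumed constant:

```python
# Per-cycle retention rates per DNA layer (super_core: midpoint of band)
RETENTION = {"user_core": 0.99, "super_core": 0.92, "general": 0.80}
PRUNE_BELOW = 0.05  # assumption: stability floor for pruning

def decay_cycle(records: dict, layer: str) -> dict:
    """Apply one kinetic-decay cycle; drop records below the floor."""
    r = RETENTION[layer]
    return {k: v * r for k, v in records.items() if v * r >= PRUNE_BELOW}

mem = {"current task": 1.0, "stale note": 0.06}
print(decay_cycle(mem, "general"))  # the stale note falls below the floor
```

Layer-specific rates mean identity facts are essentially immortal while working context fades within a few dozen cycles unless crystallization rescues it.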

US Provisional Patent Application No. 63/969,703
Autonomous Dynamic Cognitive Memory Management System for Large (LLM) and Small (SLM) Language Models. The core architecture — DNA Layering, Cognitive Crystallization, SDR Indexing, and Synaptic Synthesis — is patent pending. Open-source SDK available under MIT License.
Comparison

How Aura compares

Most agent memory solutions require LLM calls for basic operations. Aura is pure local computation.

Feature                          | Aura     | Mem0             | Zep           | Letta/MemGPT
---------------------------------|----------|------------------|---------------|-----------------
LLM required                     | No       | Yes              | Yes           | Yes
Embedding model required         | No       | Yes              | Yes           | No
Works fully offline              | Yes      | Partial          | No            | With local LLM
Cost per operation               | $0       | API billing      | Credit-based  | LLM cost
Recall latency (1K records)      | <1ms     | ~200ms+          | ~200ms        | LLM-bound
Binary size                      | 2.7 MB   | ~50 MB+ (Python) | Cloud service | ~50 MB+ (Python)
Memory lifecycle (decay/promote) | Built-in | Via LLM          | –             | Via LLM
Trust & provenance               | Yes      | –                | –             | –
Encryption at rest               | ChaCha20 | –                | –             | –
Language                         | Rust     | Python           | Proprietary   | Python
Integration

Three lines to remember everything

Python SDK works with any LLM framework. Store, recall, done.

basic_usage.py
from aura import Aura, Level

brain = Aura("./data")

# Store into DNA Memory Layers
brain.store(
    "User prefers dark mode",
    level=Level.Identity,  # user_core
    tags=["preference"],
)

# SDR-powered recall — O(k) complexity
context = brain.recall(
    "user preferences",
    token_budget=2000,
)

# === COGNITIVE CONTEXT ===
# [IDENTITY]
# - User prefers dark mode [preference]
ollama_agent.py
# Fully local AI with persistent memory
# No cloud. No API keys. Everything local.
import ollama
from aura import Aura

brain = Aura("./agent_data")
user_message = "What theme do I like?"

# Get relevant context before the LLM call
context = brain.recall(
    user_message,
    token_budget=2000,
)

# Inject into Ollama system prompt
response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": user_message},
    ],
)

DNA Memory Layers

Memories are partitioned into hierarchical layers with adaptive kinetic decay. Cognitive Crystallization promotes records upward automatically.

user_core · retention 0.99/cycle
SDK: Level.Identity

Permanent, immutable identity. User preferences, name, core traits. Protected from decay.

super_core · retention 0.90 – 0.95/cycle
SDK: Level.Domain / Decisions

Synthesized wisdom and generalizations. Learned facts, decisions, domain knowledge. Auto-promoted via Crystallization.

general · retention 0.80/cycle
SDK: Level.Working

Transient high-velocity context. Recent messages, current tasks. Decays naturally, promotes on access frequency.

Get Started

Install in one line

Python 3.9+. Pre-built wheels for Linux, macOS, and Windows. No compilation needed.

terminal
$ pip install aura-memory
Linux x64 · macOS (Intel & Apple Silicon) · Windows x64
MIT License · Patent Pending (US 63/969,703) · Pure Rust · 2.7 MB
Built in Ukraine