Every AI agent you’ve ever used has the same dirty secret: its memory is unverifiable.

When ChatGPT “remembers” something about you, can you:

  • Prove when it learned that fact?
  • See how it’s evolved over time?
  • Verify it hasn’t been tampered with?
  • Delete it completely?

No. None of that. It’s a black box.

And if you’re building AI agents for production — agents that make decisions about your business, your data, your money — this isn’t just inconvenient. It’s a liability.


The Threat Model

Three real threats to AI memory systems:

1. Memory Poisoning

A malicious prompt injects false “memories” into your agent. Now every future decision is corrupted by data you can’t audit.

2. Prompt Injection

Hidden instructions in retrieved context hijack your agent’s behavior. The memory system becomes the attack vector.

3. Compliance Gaps

When regulators ask “why did your AI do that?” — you need receipts. Not “the model thought it was a good idea.”

The EU AI Act kicks in August 2026. High-risk AI systems will require traceability and documentation. If your agent memory can’t provide that, you’re not compliant.


The Insight: Git is Perfect for This

We asked a simple question: what if we used git — the same tool that tracks every change to the Linux kernel — to track every change to AI memory?

Turns out, git’s 20-year-old primitives are perfect for this:

Git Primitive     Memory Application
Commit SHA        Cryptographic proof of when memory was created
Signed commits    Who (human or agent) created this memory
Blame             Line-by-line provenance for every fact
Merkle tree       Tamper-evident history — change anything, hash chain breaks
Branches          Explore hypothetical memory states

Every memory gets:

  • A cryptographic hash (SHA-256)
  • A timestamp
  • A signature (agent DID or human GPG)
  • A complete edit history
  • The ability to be audited by anyone
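
The tamper-evidence property is easy to demonstrate. Here is a minimal sketch in plain Python (illustrative, not the GAM implementation): each record’s hash covers its content, timestamp, and parent hash, the way a git commit SHA covers its tree and parent commit. Change any record and verification fails from that point on.

```python
import hashlib
import json
import time

def record_hash(content: str, ts: float, parent: str) -> str:
    # The hash covers content, timestamp, and the parent hash,
    # mirroring how a git commit SHA covers its tree and parent commit.
    payload = json.dumps({"content": content, "ts": ts, "parent": parent})
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, content: str) -> None:
    parent = chain[-1]["hash"] if chain else "0" * 64
    ts = time.time()
    chain.append({"content": content, "ts": ts, "parent": parent,
                  "hash": record_hash(content, ts, parent)})

def verify(chain: list) -> bool:
    parent = "0" * 64
    for rec in chain:
        if rec["parent"] != parent:
            return False
        if record_hash(rec["content"], rec["ts"], rec["parent"]) != rec["hash"]:
            return False
        parent = rec["hash"]
    return True

chain = []
append(chain, "User prefers weekly summaries")
append(chain, "Project deadline is Q3")
assert verify(chain)

# Tamper with the first memory: the chain no longer verifies.
chain[0]["content"] = "User prefers daily summaries"
assert not verify(chain)
```

Git gives you this structure for free, plus signatures and tooling on top of it.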

The Architecture

GAM (Git-Native Agent Memory) uses a three-layer approach:

┌─────────────────────────────────────────┐
│           RETRIEVAL LAYER               │
│   Semantic Index + Temporal Scoring     │
├─────────────────────────────────────────┤
│          PERMISSION LAYER               │
│   W^X Access Control + HITL Gates       │
├─────────────────────────────────────────┤
│           STORAGE LAYER                 │
│   Git Repository (Source of Truth)      │
└─────────────────────────────────────────┘

Storage: Plain Markdown files in a git repo. Human-readable. Version-controlled. Signed.

Permissions: Write XOR Execute (W^X) — critical files like agent identity require human GPG signatures. Daily logs are open. Archives are read-only.

Retrieval: Semantic search plus temporal decay. Memories fade over time unless reinforced by access. Point-in-time queries. All of it derived from git itself, not a separate database.
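
GAM’s exact scoring function isn’t shown here, but a common way to implement “memories fade unless reinforced” is exponential decay on time since last access, multiplied into the similarity score. A sketch (the 30-day half-life is an assumed tuning parameter, not a GAM default):

```python
import time

HALF_LIFE_DAYS = 30  # assumed tuning parameter

def memory_score(similarity: float, last_access_ts: float, now: float = None) -> float:
    """Combine semantic similarity with exponential temporal decay.

    A memory's weight halves every HALF_LIFE_DAYS unless an access
    refreshes last_access_ts (reinforcement).
    """
    if now is None:
        now = time.time()
    age_days = (now - last_access_ts) / 86400
    decay = 0.5 ** (age_days / HALF_LIFE_DAYS)
    return similarity * decay

t = time.time()
fresh = memory_score(0.9, t, now=t)                # just accessed
stale = memory_score(0.9, t - 60 * 86400, now=t)   # 60 days untouched
assert fresh > stale                               # two half-lives -> quarter weight
```

Because every access can be recorded as a commit, the “reinforced by access” signal falls out of git history rather than a separate usage table.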


Cryptographic Identity Chain

Agents have their own cryptographic identities (did:key), derived from a master seed via BIP-32 (same tech as Bitcoin wallets).

Human Owner (GPG Key)
    │
    │ Signs: SOUL.md, AGENTS.md, capability grants
    │
    ▼
Agent Identity (DID, derived via BIP-32)
    │
    │ Signs: Memory updates, daily logs
    │
    ▼
Memory Artifacts (Commit SHA)

Critical changes (agent identity, security config) require human approval. Regular memories get agent signatures. Everything is cryptographically bound.
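
For the curious, BIP-32 derivation is HMAC-SHA512 applied recursively. A condensed, illustrative sketch of master-key and hardened-child derivation (edge-case validity checks and the did:key encoding are omitted, and the derivation path GAM actually uses is not specified here):

```python
import hashlib
import hmac

# Order of the secp256k1 curve; BIP-32 key arithmetic is mod N.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def master_key(seed: bytes):
    """BIP-32 master key: HMAC-SHA512 keyed with the literal 'Bitcoin seed'."""
    digest = hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()
    return digest[:32], digest[32:]  # (private key, chain code)

def derive_hardened(key: bytes, chain_code: bytes, index: int):
    """Hardened child: data = 0x00 || parent_key || ser32(index + 2^31)."""
    idx = (0x80000000 + index).to_bytes(4, "big")
    digest = hmac.new(chain_code, b"\x00" + key + idx, hashlib.sha512).digest()
    child = (int.from_bytes(digest[:32], "big") + int.from_bytes(key, "big")) % N
    return child.to_bytes(32, "big"), digest[32:]

# Same seed + same index -> same agent key, on any machine, offline.
k, c = master_key(b"example-master-seed")
agent_key, _ = derive_hardened(k, c, 0)
assert agent_key == derive_hardened(k, c, 0)[0]  # deterministic
assert agent_key != derive_hardened(k, c, 1)[0]  # distinct per index
```

Determinism is the point: an agent’s identity can be re-derived from the master seed alone, so there is no key database to lose or tamper with.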


What This Enables

1. Trust Provenance

“When did you learn this? Who told you? Has it changed?” → Git blame. Commit history. Signature verification.

2. Compliance Readiness

“Prove your AI’s decision trail.” → Export the repo. Every decision has a hash.

3. Memory Editing

“I need to correct/delete something.” → Standard git operations. Full audit trail of the change.

4. Multi-Agent Sync

“Share memories between agents.” → Git push/pull. Selective sharing. Merge conflicts visible.

5. Offline Operation

“Works without network.” → Git works offline. Full functionality, sync when connected.
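
Most of these claims reduce to ordinary git operations. A quick demonstration with stock git (standard commands only, not the substr8 CLI):

```shell
set -e
dir=$(mktemp -d)
git -C "$dir" init -q

# An agent records a memory as a plain Markdown file and commits it.
echo "User prefers weekly summaries" > "$dir/2025-06-01.md"
git -C "$dir" add .
git -C "$dir" -c user.name=agent -c user.email=agent@example.com \
    commit -q -m "remember: user summary preference"

# When was this learned, and by whom?
git -C "$dir" log --format='%H %an %aI' -- 2025-06-01.md

# Which commit introduced each line of the memory file?
git -C "$dir" blame 2025-06-01.md
```

Correction and deletion are the same story: amend or revert commits, and the audit trail of the change is itself a commit.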


Try It

GAM is open source and part of the Substr8 stack:

pip install substr8-cli
substr8 gam init
substr8 gam remember "Important fact" --tag business
substr8 gam recall "fact"
substr8 gam verify <memory-id>

Part of the Stack

GAM is one piece of the Substr8 platform for provable AI agents:

  • FDAA: File-Driven Agent Architecture (how agents are defined)
  • ACC: Agent Capability Control (what agents can do)
  • GAM: Git-Native Agent Memory (what agents remember)
  • DCT: Deterministic Capability Tokens (proof of what agents did)

We believe AI systems should be provable, not just probable.

Memory is where that starts.