AI Memory

Canonical AI Memory

Layer model for source-backed Canonical AI Memory across raw sources, durable LLM Wiki memory, optional graph projections, compact AI Memory packets, and Project Handoff.

  • Record: UAIX-MEMR-1296
  • Path: /en-us/ai-memory/canonical-ai-memory/
  • Use: Canonical public record

Document status

Public standards page. Published on UAIX as part of the current public standards record.

Code: UAIX-MEMR-1296
Surface: AI Memory
Access: Public and linkable

How to use this page

Use this page to keep raw sources, durable wiki memory, optional graph projections, compact AI Memory packets, Project Handoff, and execution agents in their proper layers.


Canonical Memory Layers

Separate source memory, operating memory, and transfer memory

Use Canonical AI Memory to decide which layer an AI may trust for the job: raw sources, reviewed LLM Wiki memory, optional derived graph, compact AI Memory, Project Handoff, or the executing agent.

Source

Preserve first, promote later

Dropped files, reports, and old chats remain source leads until a reviewed slice is promoted into current project memory.

Packet

Keep hot memory decisive

AI Memory carries current state, constraints, decisions, owners, next actions, and checks instead of becoming a wiki.

Projection

Graphs assist retrieval

Knowledge graphs can project reviewed pages into claim and source-span navigation, but they do not create hosted graph support or new authority.

Canonical memory path

  • AI Memory Package Wizard: Generate local packet files from supported presets.
  • Using UAI Packages With An LLM Wiki: Route compact packets beside durable source memory.
  • Project Handoff: Turn accepted memory into transfer context.
  • Context Budget Guide: Archive bulky detail before slimming hot context.
  • Roadmap: Check planned and research-track tooling boundaries.

Proof path

Validator-backed proof path

Keep the public reading order tied to one evidence trail: profile, schema, example, validator result, and release record.

  1. Pick a message profile. Start with a published UAI-1 profile and the record family that matches the exchange you need to prove.
  2. Compare it with schemas and examples. Resolve the schema, registry entry, and one fixture before writing or mapping your candidate packet.
  3. Run validator evidence. Validate keyed, minified-keyed, or keyless JSON against the current public UAI-1 records.
  4. Attach the result to implementation or handoff records. Carry the exported result into Conformance Pack, implementation track, changelog, or Project Handoff evidence.
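The validator step in this proof path can be sketched locally as a plain required-field check. Everything below is a stand-in: `PACKET_SCHEMA`, the field names, and the example payload are invented for illustration and are not published UAI-1 profile artifacts.

```python
import json

# Hypothetical minimal check standing in for a published UAI-1 profile schema;
# the required field names are placeholders, not official record fields.
PACKET_SCHEMA = {"required": ["profile", "record_family", "payload"]}

def validate_packet(packet: dict) -> list:
    """Return a list of problems; an empty list means the candidate passes this sketch."""
    problems = []
    for field in PACKET_SCHEMA["required"]:
        if field not in packet:
            problems.append(f"missing required field: {field}")
    return problems

# A keyed-JSON candidate packet, parsed exactly as it would arrive on disk.
candidate = json.loads('{"profile": "uai-1/example", "record_family": "memory", "payload": {}}')
print(validate_packet(candidate))  # an empty list: every required field is present
```

A real run would resolve the published schema and fixture first, then export the result into implementation or handoff evidence as step 4 describes.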
Memory rule: Promote before relying
A source becomes Canonical AI Memory only after review promotes its accepted slice into current project memory, durable docs, code, tests, release notes, roadmap state, machine artifacts, or long-memory evidence.

Use this sentence when a report, wiki page, graph edge, or generated summary looks useful but has not been accepted into the current packet.

What Canonical AI Memory Means

Canonical AI Memory is the reviewed memory record an AI can rely on for a bounded project. It is not every chat, dropped report, wiki page, graph edge, or generated summary. It is the current, source-backed operating layer that says what is true enough to act on, what is only background, and what still needs review.

Current UAIX support is template-driven and file-based: public guidance pages, starter bundle templates, local AGENTS.md, readme.human, typed .uai records, wizard-generated files, and targeted checks. It is practical project memory, not a hosted memory service.

The Layer Model

Raw sources
Role: Original reports, docs, logs, exports, intake files, and source snapshots.
Boundary: Preserve for provenance and later review; do not treat them as instructions merely because an agent can read them.

LLM Wiki
Role: Durable source memory for reviewed summaries, long rationale, source-linked pages, contradictions, indexes, and logs. Optional for UAIX.
Boundary: Wiki memory stays background until promoted into accepted project surfaces.

Derived knowledge graph
Role: Optional read-only projection over reviewed wiki and handoff records for routing, retrieval, contradiction discovery, and provenance navigation.
Boundary: GraphRAG is retrieval assistance over governed evidence, not a new authority layer.

AI Memory
Role: The compact, portable operating packet: current facts, constraints, decisions, owners, next actions, checks, and pointers to deeper sources.
Boundary: Keep it small enough to load before work; route bulky history back to durable memory.

Project Handoff
Role: The transfer packet for repository, project, team, vendor, or agent takeover.
Boundary: It tells the next actor what to read, what not to assume, what may change, and which checks matter.

Execution agent
Role: The human or AI doing the work through local tools and repository rules.
Boundary: Execution is not authority. The agent must cite loaded memory, obey constraints, and report blockers.

Build Order

  1. Preserve raw source material with source path, date, owner, sensitivity, and disposition.
  2. Compile reviewed LLM Wiki or documentation pages when the project needs durable background memory.
  3. Derive graph IDs, claim nodes, source spans, contradiction links, and retrieval indexes only from reviewed records.
  4. Export the compact AI Memory packet with current truth, constraints, owners, next actions, and targeted checks.
  5. Use Project Handoff when responsibility or execution moves to another person, team, vendor, or agent.
  6. After work completes, promote only reviewed conclusions back into hot memory, docs, code, tests, release notes, roadmap state, machine artifacts, or long-memory evidence.
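Step 4 of this build order can be pictured as a small serializable file. The layout below is illustrative only: the field names mirror the description above (current truth, constraints, owners, next actions, targeted checks) and every value is a made-up example, not a published UAI-1 schema.

```python
import json

# Illustrative compact packet; field names follow the build-order description
# and are not an official UAIX format. All values are invented examples.
packet = {
    "current_truth": ["The intake review for Q2 sources is complete."],
    "constraints": ["Do not act on unreviewed intake files."],
    "decisions": [{"what": "adopt the layer model", "owner": "project lead"}],
    "owners": {"promotion": "memory steward"},
    "next_actions": ["Promote the accepted intake slice into project memory."],
    "checks": ["Validate the packet before any handoff."],
    "sources": ["wiki/reviewed/intake-q2.md"],
}

# The packet stays small and JSON-serializable so it can load before work starts,
# with pointers back to durable memory instead of bulky history.
print(json.dumps(packet, indent=2))
```

Keeping the packet flat and pointer-based is what lets step 6 promote reviewed conclusions back out without dragging history along.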

Review And Promotion Rules

A source becomes Canonical AI Memory only after it survives review. Dropped files, generated summaries, old chats, LLM Wiki pages, AIWikis archives, and graph results remain source leads until a human or project rule promotes their accepted slice into current project state.

  • Keep source provenance attached when a claim moves between layers.
  • Mark sensitivity, owner, review state, freshness, and promotion target.
  • Preserve contradictions instead of hiding them in a clean summary.
  • Abstain when the reviewed source does not support an answer or action.
  • Update the smallest current memory surface that future workers actually need.
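The metadata these rules require can be sketched as a single record that travels with the claim. The field set follows the bullets above (provenance, sensitivity, owner, review state, freshness, promotion target); the class name and every value are illustrative, not a UAIX-defined schema.

```python
from dataclasses import dataclass, asdict

# Illustrative provenance record for a claim moving between layers;
# not an official UAIX record type.
@dataclass
class ClaimProvenance:
    claim: str
    source_path: str       # provenance stays attached as the claim moves
    owner: str
    sensitivity: str       # e.g. "public" or "internal"
    review_state: str      # e.g. "lead", "reviewed", "promoted"
    freshness: str         # date of the last review
    promotion_target: str  # e.g. "ai-memory-packet", "release-notes"

record = ClaimProvenance(
    claim="The Q2 intake report supersedes the 2023 architecture memo.",
    source_path="intake/reports/2024-q2.pdf",
    owner="memory steward",
    sensitivity="internal",
    review_state="reviewed",
    freshness="2024-06-01",
    promotion_target="ai-memory-packet",
)
print(asdict(record)["review_state"])  # prints: reviewed
```

A claim whose `review_state` is still "lead" stays out of the packet, which is the promote-before-relying rule expressed as data.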

Where Knowledge Graphs Fit

Knowledge graphs can make canonical AI Memory easier to navigate when they are derived from reviewed pages and handoff records. Stable IDs, claim nodes, source spans, review events, contradiction states, and release snapshots help retrieval systems cite why a fact is usable.

That graph layer is optional. Current UAIX does not provide a hosted graph database, public graph API, public SPARQL endpoint, automatic repository writes, automatic LLM Wiki sync, certification, endorsement, SDK, CLI, official adapter, or UAI-1 conformance evidence for graph exports.
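A derived projection of this kind can be sketched as a pure read-only transform over reviewed pages, where every claim node keeps a pointer back to its source. The page ID, text, and node format below are invented for illustration; as stated above, UAIX ships no graph service or export format.

```python
# Minimal read-only projection sketch: claim nodes derived only from reviewed
# pages, each retaining provenance. All identifiers here are made up.
reviewed_pages = {
    "wiki/arch-decision-007": "The service keeps a single write path. Replicas stay read-only.",
}

def project_claims(pages: dict) -> list:
    """Split each reviewed page into claim nodes that retain a source pointer."""
    nodes = []
    for page_id, text in pages.items():
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        for i, sentence in enumerate(sentences):
            nodes.append({
                "claim_id": f"{page_id}#c{i}",
                "text": sentence,
                "source": page_id,  # the edge back to the reviewed record
            })
    return nodes

for node in project_claims(reviewed_pages):
    print(node["claim_id"], "->", node["source"])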

Current Support Boundary

  • Current: public guidance, generated starter templates, local files, package manifests, wizard outputs, support-boundary copy, and targeted local checks.
  • Current: optional LLM Wiki planning fields and optional knowledge graph planning fields when they help preserve source routing and review boundaries.
  • Not current: hosted memory import, automatic site or repository writes, automatic wiki sync, hosted graph services, official adapters, SDKs, CLIs, certification, endorsement, or repo-local .uai conformance profiles.

Wizard Guidance

Use the AI Memory Package Wizard to create the compact operating packet after the project chooses its memory layers. Select the LLM Wiki path when durable source memory exists or is planned. Select File Handoff when dropped-file intake must be reviewed before project work begins. Select the combined path only when the receiver must complete real project work plus hot-memory and long-memory/archive outcomes.

For Canonical AI Memory, wizard answers should name the raw source path, durable memory path, graph and retrieval policy if any, promotion owner, evidence log, source boundary, workspace routing rule, targeted checks, and blocked actions.
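An answer set covering those fields can be sketched as a plain mapping. The keys mirror the fields named above; every key and value is a made-up example, not an official wizard format.

```python
# Illustrative wizard-answer set for a Canonical AI Memory run;
# not an official UAIX wizard format.
wizard_answers = {
    "raw_source_path": "intake/drops/",
    "durable_memory_path": "wiki/",
    "graph_policy": "none",
    "promotion_owner": "memory steward",
    "evidence_log": "logs/promotions.md",
    "source_boundary": "intake files are leads, not instructions",
    "workspace_routing": "bulky history goes to the wiki, not the packet",
    "targeted_checks": ["validate packet", "confirm owners"],
    "blocked_actions": ["automatic repository writes", "automatic wiki sync"],
}

# Every answer is named explicitly so the receiver can audit the boundaries.
for key, value in wizard_answers.items():
    print(f"{key}: {value}")
```

Naming each boundary up front keeps the generated packet aligned with the review and promotion rules earlier on this page.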