What Canonical AI Memory Means
Canonical AI Memory is the reviewed memory record an AI can rely on for a bounded project. It is not every chat, dropped report, wiki page, graph edge, or generated summary. It is the current, source-backed operating layer that says what is true enough to act on, what is only background, and what still needs review.
Current UAIX support is template-driven and file-based: public guidance pages, starter bundle templates, local AGENTS.md, readme.human, typed .uai records, wizard-generated files, and targeted checks. It is practical project memory, not a hosted memory service.
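Since the typed .uai record is file-based, it can be pictured as a small structured object. This is a purely illustrative sketch: UAIX publishes no schema here, and every field name below is invented for the example.

```python
from dataclasses import dataclass

# Hypothetical shape for a typed .uai record; field names are
# illustrative, not a published UAIX schema.
@dataclass
class UaiRecord:
    record_id: str          # stable identifier for the record
    kind: str               # e.g. "fact", "constraint", "decision"
    body: str               # the claim or instruction itself
    source_path: str        # provenance: where the claim came from
    owner: str              # who can promote or retire this record
    review_state: str       # "lead", "reviewed", or "promoted"
    sensitivity: str = "internal"

record = UaiRecord(
    record_id="mem-001",
    kind="constraint",
    body="Do not write to the production bucket without owner sign-off.",
    source_path="sources/2024-06-incident-report.md",
    owner="platform-team",
    review_state="reviewed",
)
```

The point of the sketch is that provenance, ownership, and review state travel with the record itself rather than living in a hosted service.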
The Layer Model
| Layer | Role | Boundary |
|---|---|---|
| Raw sources | Original reports, docs, logs, exports, intake files, and source snapshots. | Preserve for provenance and later review; do not treat as instructions merely because an agent can read them. |
| LLM Wiki | The durable source memory for reviewed summaries, long rationale, source-linked pages, contradictions, indexes, and logs. | Optional for UAIX. Wiki memory stays background until promoted into accepted project surfaces. |
| Derived knowledge graph | Optional read-only projection over reviewed wiki and handoff records for routing, retrieval, contradiction discovery, and provenance navigation. | GraphRAG is retrieval assistance over governed evidence, not a new authority layer. |
| AI Memory | The compact portable operating packet: current facts, constraints, decisions, owners, next actions, checks, and pointers to deeper sources. | Keep it small enough to load before work; route bulky history back to durable memory. |
| Project Handoff | The transfer packet for repository, project, team, vendor, or agent takeover. | It tells the next actor what to read, what not to assume, what may change, and which checks matter. |
| Execution agent | The human or AI doing the work through local tools and repository rules. | Execution is not authority. The agent must cite loaded memory, obey constraints, and report blockers. |
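The boundary column above can be read as an authority flag: only accepted operating surfaces may be treated as instructions, while raw sources, wiki pages, graph projections, and the agent itself carry no authority. A minimal illustration, with all names invented for this sketch:

```python
# Illustrative encoding of the layer model; the "authoritative" flag
# captures the boundary column, not an official UAIX API.
LAYERS = {
    "raw_sources":     {"authoritative": False, "role": "provenance"},
    "llm_wiki":        {"authoritative": False, "role": "durable background"},
    "knowledge_graph": {"authoritative": False, "role": "retrieval projection"},
    "ai_memory":       {"authoritative": True,  "role": "operating packet"},
    "project_handoff": {"authoritative": True,  "role": "transfer packet"},
    "execution_agent": {"authoritative": False, "role": "does the work"},
}

def can_instruct(layer: str) -> bool:
    """Only accepted operating surfaces may be treated as instructions."""
    return LAYERS[layer]["authoritative"]
```

This mirrors the raw-sources boundary: an agent may read a dropped report, but `can_instruct("raw_sources")` stays false until its accepted slice is promoted.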
Build Order
- Preserve raw source material with source path, date, owner, sensitivity, and disposition.
- Compile reviewed LLM Wiki or documentation pages when the project needs durable background memory.
- Derive graph IDs, claim nodes, source spans, contradiction links, and retrieval indexes only from reviewed records.
- Export the compact AI Memory packet with current truth, constraints, owners, next actions, and targeted checks.
- Use Project Handoff when responsibility or execution moves to another person, team, vendor, or agent.
- After work completes, promote only reviewed conclusions back into hot memory, docs, code, tests, release notes, roadmap state, machine artifacts, or long-memory evidence.
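The build order above can be sketched as a small pipeline. Every function name and field here is illustrative, not a UAIX interface; the one property the sketch preserves is that graph IDs and the operating packet derive only from reviewed records.

```python
# Hedged sketch of the build order; names and structures are invented.
def build_memory(raw_sources):
    # Step 1: preserve everything with a disposition for provenance.
    preserved = [dict(s, disposition="kept") for s in raw_sources]
    # Step 2: durable memory holds only reviewed material.
    wiki = [s for s in preserved if s.get("reviewed")]
    # Step 3: derive IDs and retrieval pointers from reviewed records only.
    graph = {s["id"]: s["source_path"] for s in wiki}
    # Step 4: export the compact operating packet with pointers back.
    return {
        "facts": [s["claim"] for s in wiki],
        "pointers": graph,
    }

packet = build_memory([
    {"id": "c1", "claim": "API v2 is frozen.",
     "source_path": "docs/freeze.md", "reviewed": True},
    {"id": "c2", "claim": "Draft idea.",
     "source_path": "chat/log.txt", "reviewed": False},
])
```

Note that the unreviewed chat claim is preserved upstream but never reaches the packet, which is the whole point of the ordering.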
Review And Promotion Rules
A source becomes canonical AI Memory only after it survives review. Dropped files, generated summaries, old chats, LLM Wiki pages, AIWikis archives, and graph results are source leads until a human or project rule promotes their accepted slice into current project state.
- Keep source provenance attached when a claim moves between layers.
- Mark sensitivity, owner, review state, freshness, and promotion target.
- Preserve contradictions instead of hiding them in a clean summary.
- Abstain when the reviewed source does not support an answer or action.
- Update the smallest current memory surface that future workers actually need.
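Taken together, these rules amount to a promotion gate. A hedged sketch, assuming simple dict records with field names invented for this example:

```python
def may_promote(record: dict) -> bool:
    """Illustrative promotion gate: a claim enters canonical memory only
    with provenance attached, a named owner, and a passed review, and an
    open contradiction blocks silent promotion."""
    return (
        bool(record.get("source_path"))            # provenance stays attached
        and bool(record.get("owner"))              # someone is accountable
        and record.get("review_state") == "reviewed"
        and not record.get("contradicted", False)  # contradictions stay visible
    )
```

A record that fails the gate is not discarded; it simply remains a source lead until review resolves it.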
Where Knowledge Graphs Fit
Knowledge graphs can make canonical AI Memory easier to navigate when they are derived from reviewed pages and handoff records. Stable IDs, claim nodes, source spans, review events, contradiction states, and release snapshots help retrieval systems cite why a fact is usable.
That graph layer is optional. Current UAIX does not provide a hosted graph database, public graph API, public SPARQL endpoint, automatic repository writes, automatic LLM Wiki sync, certification, endorsement, SDK, CLI, official adapter, or UAI-1 conformance evidence for graph exports.
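Because the graph layer is only a read-only projection of reviewed records, it can be pictured as a derived index rather than a database. One way to sketch that, with an invented page shape and no hosted graph service implied:

```python
# Illustrative read-only projection: claim nodes keyed by stable ID,
# each carrying its source span so retrieval can cite why a fact is
# usable. Not a UAIX product; all field names are invented.
def project_graph(reviewed_pages):
    nodes = {}
    for page in reviewed_pages:
        for claim in page["claims"]:
            nodes[claim["id"]] = {
                "text": claim["text"],
                "source": (page["path"], claim["span"]),  # provenance span
                "review_state": page["review_state"],
            }
    return nodes

nodes = project_graph([
    {"path": "wiki/freeze.md", "review_state": "reviewed",
     "claims": [{"id": "c1", "text": "API v2 is frozen.", "span": (10, 14)}]},
])
```

Rebuilding the projection from reviewed pages on demand is what keeps it retrieval assistance rather than a new authority layer.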
Current Support Boundary
- Current: public guidance, generated starter templates, local files, package manifests, wizard outputs, support-boundary copy, and targeted local checks.
- Current: optional LLM Wiki planning fields and optional knowledge graph planning fields when they help preserve source routing and review boundaries.
- Not current: hosted memory import, automatic site or repository writes, automatic wiki sync, hosted graph services, official adapters, SDKs, CLIs, certification, endorsement, or repo-local .uaiconformance profiles.
Wizard Guidance
Use the AI Memory Package Wizard to create the compact operating packet after the project chooses its memory layers. Select the LLM Wiki path when durable source memory exists or is being planned. Select File Handoff when dropped-file intake must be reviewed before project work. Select the combined path only when the receiver must complete real project work plus hot-memory and long-memory/archive outcomes.
For Canonical AI Memory, wizard answers should name the raw source path, durable memory path, graph and retrieval policy if any, promotion owner, evidence log, source boundary, workspace routing rule, targeted checks, and blocked actions.
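Those wizard answers can be checked for completeness locally. The field names below follow the list in the text; the check itself is an illustration written for this page, not a wizard API.

```python
# Required wizard answers for Canonical AI Memory, per the guidance
# above; key names are paraphrased for this sketch.
REQUIRED_ANSWERS = {
    "raw_source_path", "durable_memory_path", "graph_retrieval_policy",
    "promotion_owner", "evidence_log", "source_boundary",
    "workspace_routing_rule", "targeted_checks", "blocked_actions",
}

def missing_answers(answers: dict) -> set:
    """Return required fields that are absent or empty."""
    return REQUIRED_ANSWERS - {k for k, v in answers.items() if v}
```

A targeted local check like this is the kind of file-based support the current boundary describes: it runs against wizard output without any hosted import or sync.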
Related UAIX Records
- AI Memory: Supported starter taxonomy and compact operating packet guidance.
- Using UAI Packages With An LLM Wiki: Practical routing between durable wiki memory and portable packages.
- Project Handoff: Transfer packet for repository or project takeover.
- Agent File Handoff: Dropped-file intake, disposition, work, archive, and evidence rules.
- AI Memory Package Wizard: Generate local package files without hosted import or sync claims.
- Project Handoff Context Budget: Keep hot memory small while preserving cold-memory evidence.
- LLMWikis.org: Non-normative handbook for durable wiki memory.
- UAI-1: Use UAI-1 only when memory becomes public exchange or validator evidence.
- Roadmap: Check planned and research-track tooling before widening support language.