What AI Memory Means In UAIX
UAI AI Memory is a lightweight, portable, file-based standard for durable context. It gives humans and AI agents a reviewable packet of project memory instead of relying on private chat history, hidden model settings, one vendor account, or a stale folder of notes.
AI Memory is not a general knowledge base. It is the compact operating memory a future actor should load before acting: project purpose, current state, constraints, decisions, next actions, owners, trust boundaries, maintenance rules, and targeted checks.
Why Unstructured AI Memory Fails
Unstructured memory fails because it mixes old chats, generated summaries, wiki notes, screenshots, logs, and roadmap guesses without saying what is current or binding. The next agent may miss a red line, believe an obsolete claim, leak sensitive material, or run the wrong checks because the project never named its memory contract.
- No front door: the agent cannot tell where to begin.
- No lifecycle: working state, transfer packets, decisions, onboarding notes, and audit evidence age differently.
- No trust boundary: internal-only material can be handed to an outside vendor or autonomous agent by accident.
- No canonical source: visible examples, downloadable ZIPs, and docs drift because sample files are copied in several places.
Why File-Based Memory Works
File-based memory is boring in the best way. It can be reviewed in a pull request, zipped for a handoff, redacted before external sharing, loaded by different agents, archived with a release, and tested for drift. UAIX uses plain text and deterministic manifests so people can inspect what an AI is about to treat as context.
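Because each bundle ships a deterministic manifest with per-file SHA-256 checksums, drift testing can be a small script. The sketch below assumes the manifest layout used by the starter bundles on this page (a top-level `files` array of `path`/`sha256` entries); it is illustrative, not an official UAIX tool.

```python
import hashlib
import json
from pathlib import Path

def verify_bundle(bundle_dir: str, manifest_name: str = "UAI_MEMORY_MANIFEST.json") -> list[str]:
    """Compare each file's SHA-256 against the manifest and report drift."""
    root = Path(bundle_dir)
    manifest = json.loads((root / manifest_name).read_text())
    problems = []
    for entry in manifest.get("files", []):
        path = root / entry["path"]
        if not path.is_file():
            problems.append(f"missing: {entry['path']}")
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest != entry["sha256"]:
            problems.append(f"drift: {entry['path']}")
    return problems
```

An empty result means the bundle matches its manifest; anything else should be resolved before the bundle is trusted or shared.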
AI Memory Taxonomy
UAIX now treats Project AI Memory as the ongoing working-memory configuration and Project Handoff as a subtype of AI Memory for transfer. Additional configurations exist only when they have a different lifecycle, trust boundary, or consumption pattern. Team Memory and Product Memory are documented as views over existing bundles, while certification-style or regulated-data memory is deferred until the public evidence and safety processes exist.
Choose the right AI Memory configuration
These supported starter bundles are presets over one canonical file-template registry. Shared files come from the same template IDs; bundle-specific guidance is added through metadata and overlays.
| Configuration | Use when | Lifecycle and trust boundary | Download |
|---|---|---|---|
| Project AI Memory (`project-ai-memory`) | Use when a project is active and context must persist across many AI sessions without turning the bundle into a full knowledge base. | Lifecycle: Maintained continuously; current-state and next-action files change often, decisions and constraints change carefully. Trust: Internal or controlled collaboration by default. Review before sharing externally or giving to an autonomous agent. | `uai-ai-memory-starter.zip` |
| Project Handoff (`project-handoff`) | Use when the next actor must take over a project safely and needs current state, constraints, decisions, checks, and ownership context. | Lifecycle: Prepared before transfer, reviewed at acceptance, and updated when responsibility or support boundaries change. Trust: Can be internal or external, but external handoffs must be sanitized and approved before sharing. | `uai-project-handoff-starter.zip` |
| Agent Session Memory (`agent-session-memory`) | Use when an agent or tool needs to resume a task with enough state to continue without replaying a whole chat transcript. | Lifecycle: Created for a run or task, updated frequently, then merged into project memory or archived when the task closes. Trust: Often operational and sensitive. Keep permissions, tool access, blocked actions, and cleanup state explicit. | `uai-agent-session-memory-starter.zip` |
| Onboarding Memory (`onboarding-memory`) | Use when the first job is orientation rather than ownership transfer, incident review, or deep rationale capture. | Lifecycle: Reviewed before each onboarding cohort or external introduction; kept concise and introductory. Trust: Usually shareable after review, but remove internal strategy, customer data, credentials, and privileged operations. | `uai-onboarding-memory-starter.zip` |
| Decision Memory (`decision-memory`) | Use when rationale and tradeoffs matter more than current task status. | Lifecycle: Append-first and review-heavy; reversals should explain what changed rather than erase history. Trust: May include internal strategy. Review before exposing externally or to agents with broad write permissions. | `uai-decision-memory-starter.zip` |
| Client or Vendor Handoff Memory (`external-handoff-memory`) | Use when a client, vendor, partner, or outside agent needs enough context to continue work without receiving internal-only memory. | Lifecycle: Prepared as an export, redacted, approved, shared, and archived with a dated changelog entry. Trust: Strict external boundary. Remove secrets, credentials, private customer data, legal strategy, internal pricing, and unsupported claims. | `uai-external-handoff-memory-starter.zip` |
| Incident or Audit Memory (`incident-audit-memory`) | Use when facts, timestamps, mitigations, owners, evidence links, and follow-up commitments need to travel together. | Lifecycle: Opened during review, updated as evidence is confirmed, closed with follow-up owners and a retained audit trail. Trust: Potentially sensitive. Sanitize customer data, security details, credentials, legal material, and private evidence before external sharing. | `uai-incident-audit-memory-starter.zip` |
| LLM Wiki Export Memory (`llm-wiki-export-memory`) | Use when a large internal wiki needs a small, reviewable, portable packet for a project, handoff, onboarding, or agent task. | Lifecycle: Generated from reviewed wiki material, checked against source citations, then promoted or discarded after use. Trust: Wiki material is background until reviewed. Cite sources, flag uncertainty, and redact private material before export. | `uai-llm-wiki-export-memory-starter.zip` |
Views and presets over supported bundles
- Team Memory: A lightweight shared team view over Project AI Memory plus owner and onboarding records. Model this as a view until UAIX has stronger role, permission, and multi-project guidance.
- Product Memory: A durable product or feature-area view over project state, roadmap, decisions, and constraints. Model this as a view because the underlying files are the same as Project AI Memory plus Decision Memory.
Deferred configurations
- Certification Memory: Evidence packet for formal certification or endorsement workflows. Deferred because UAIX does not currently publish certification, endorsement, or hosted validation support.
- Regulated Data Memory: Memory that intentionally carries restricted personal, customer, legal, or compliance-sensitive material. Deferred because secure storage, redaction, access-control, and approval processes are outside the starter-bundle boundary.
Which Configuration To Choose
| Situation | Use | Why |
|---|---|---|
| An active project needs continuity across AI sessions. | Project AI Memory | It keeps current state, constraints, decisions, next actions, and agent instructions alive without becoming a full wiki. |
| Ownership, execution, or responsibility is moving. | Project Handoff | It packages the transfer brief, acceptance criteria, owners, constraints, and verification plan. |
| An agent run was interrupted or must resume later. | Agent Session Memory | It keeps task-local state short-lived and prevents a whole chat transcript from becoming project truth. |
| A new human, contractor, stakeholder, or agent needs a curated start. | Onboarding Memory | It emphasizes overview, glossary, owners, first actions, and safe boundaries. |
| Rationale matters more than status. | Decision Memory | It preserves tradeoffs, rejected options, reversals, and open questions. |
| A client, vendor, or outside agent needs context. | Client or Vendor Handoff Memory | It adds redaction and approval guidance around a stricter external trust boundary. |
| An incident or audit needs a portable packet. | Incident or Audit Memory | It keeps timeline, evidence references, decisions, mitigations, owners, and follow-up together. |
| A deep wiki needs a portable snapshot. | LLM Wiki Export Memory | It exports reviewed wiki material into a compact packet without letting the wiki override project truth. |
| The organization needs durable, searchable institutional knowledge. | LLM Wiki | It is stronger for deep internal documentation, source synthesis, long-lived research, and broad knowledge accumulation. |
Project AI Memory And Project Handoff
Project Handoff is a subtype of UAI AI Memory. AI Memory is the broad standard: durable AI-readable context. Project Handoff is the transfer pattern: read the front door, load the selected files, summarize current truth, confirm constraints, name intended touchpoints, and name targeted checks before broad work.
For a small project, Project AI Memory and Project Handoff can include many of the same files. For a larger organization, Project AI Memory stays alive during everyday work, while Project Handoff is prepared and reviewed when responsibility moves.
Inspect The Project AI Memory Starter
The visible files below are rendered from the same canonical template registry used by every supported bundle preset. The generated manifest is included in the ZIP and displayed with the other files so readers can inspect bundle ID, use case, lifecycle, trust boundary, file list, template IDs, and checksums.
Live Starter Bundle
Project AI Memory
The ZIP is generated on request from the 16 visible canonical files below, including the generated manifest. The download, page samples, and bundle presets share one source of truth.
Use when a project is active and context must persist across many AI sessions without turning the bundle into a full knowledge base.
UAI_MEMORY_MANIFEST.json
{
"bundle_id": "project-ai-memory",
"name": "Project AI Memory",
"description": "Ongoing working memory for an active project that needs durable context across humans, models, agents, and sessions.",
"intended_use_case": "Use when a project is active and context must persist across many AI sessions without turning the bundle into a full knowledge base.",
"lifecycle": "Maintained continuously; current-state and next-action files change often, decisions and constraints change carefully.",
"download_filename": "uai-ai-memory-starter.zip",
"display_order": 10,
"trust_boundary_notes": "Internal or controlled collaboration by default. Review before sharing externally or giving to an autonomous agent.",
"included_files": [
"README.md",
"PROJECT_OVERVIEW.md",
"CURRENT_STATE.md",
"DECISIONS.md",
"OPEN_QUESTIONS.md",
"NEXT_ACTIONS.md",
"RISKS_AND_CONSTRAINTS.md",
"CONTACTS_AND_OWNERS.md",
"AGENT_INSTRUCTIONS.md",
"CHANGELOG.md",
"AGENTS.md",
"readme.human",
".uai/context.uai",
".uai/constraints.uai",
".uai/memory.uai"
],
"shared_files": [
"README.md",
"PROJECT_OVERVIEW.md",
"CURRENT_STATE.md",
"DECISIONS.md",
"OPEN_QUESTIONS.md",
"NEXT_ACTIONS.md",
"RISKS_AND_CONSTRAINTS.md",
"CONTACTS_AND_OWNERS.md",
"AGENT_INSTRUCTIONS.md",
"CHANGELOG.md",
"AGENTS.md",
"readme.human",
".uai/context.uai",
".uai/constraints.uai",
".uai/memory.uai"
],
"bundle_specific_files": [],
"optional_sections": [
"Add wiki links only when deeper memory exists and is reviewed before promotion."
],
"overlays": [
"Use the shared README and AGENTS.md templates with Project AI Memory labels and ongoing-maintenance guidance."
],
"files": [
{
"path": "README.md",
"template_id": "readme",
"source": "template:readme@1",
"bytes": 1611,
"sha256": "5efaf5b53f6227b572c8ad5b7395bae079e0599b1e76f92f8a041405d2f4036a"
},
{
"path": "PROJECT_OVERVIEW.md",
"template_id": "project-overview",
"source": "template:project-overview@1",
"bytes": 469,
"sha256": "41a2059c111700426da7661deb7f1ca50782fa912999597c04e9895287c309c2"
},
{
"path": "CURRENT_STATE.md",
"template_id": "current-state",
"source": "template:current-state@1",
"bytes": 305,
"sha256": "cd2c51f7cef88d3fc145e027c2c285366cae4fe71d64fdbf8f1b0961fd4f1cd0"
},
{
"path": "DECISIONS.md",
"template_id": "decisions",
"source": "template:decisions@1",
"bytes": 308,
"sha256": "65e0acab758e5fa3e1b923bf3ffa41f34a711173e88961f33d594f30e6ebc146"
},
{
"path": "OPEN_QUESTIONS.md",
"template_id": "open-questions",
"source": "template:open-questions@1",
"bytes": 372,
"sha256": "22feb8d6f040220cfda353994a6832a1a020c21eec960927029bb2ef31ca1382"
},
{
"path": "NEXT_ACTIONS.md",
"template_id": "next-actions",
"source": "template:next-actions@1",
"bytes": 350,
"sha256": "c015fa569c4c248228ac9f428ef852350ee0d66af929024811e4b8b7df018067"
},
{
"path": "RISKS_AND_CONSTRAINTS.md",
"template_id": "risks-and-constraints",
"source": "template:risks-and-constraints@1",
"bytes": 806,
"sha256": "b175d690601b49dd54893f1e4c50bcca871fc3db0104751c8bff77098b16b40b"
},
{
"path": "CONTACTS_AND_OWNERS.md",
"template_id": "contacts-and-owners",
"source": "template:contacts-and-owners@1",
"bytes": 275,
"sha256": "559d3174932140af4e23ec5edd08ba6bf7232d4c979cfa85dbda2552ccdf6b7e"
},
{
"path": "AGENT_INSTRUCTIONS.md",
"template_id": "agent-instructions",
"source": "template:agent-instructions@1",
"bytes": 651,
"sha256": "93153882ee079198a030318fd6bf74e2c92f61c3aff137d0574a972272d0c5f5"
},
{
"path": "CHANGELOG.md",
"template_id": "changelog",
"source": "template:changelog@1",
"bytes": 166,
"sha256": "21e2db19d796ce5bcf6bf052bd3c650d1790a40c87744b2d2634421b6ff7cac2"
},
{
"path": "AGENTS.md",
"template_id": "agents-md",
"source": "template:agents-md@1",
"bytes": 1779,
"sha256": "996b53cc7b7b35e4c5723772a7121ec955dda89e6ad493c016213c4db133ea43"
},
{
"path": "readme.human",
"template_id": "readme-human",
"source": "template:readme-human@1",
"bytes": 1096,
"sha256": "f4afa84d2ee8bba59bf606306396b8e49bec556496be2f937c97390d41b3ccfd"
},
{
"path": ".uai/context.uai",
"template_id": "uai-context",
"source": "template:uai-context@1",
"bytes": 273,
"sha256": "a3f78d0d33d3f810a179d6c7dc8bb503a4b9d72e014ccdc6487629a9534dafc4"
},
{
"path": ".uai/constraints.uai",
"template_id": "uai-constraints",
"source": "template:uai-constraints@1",
"bytes": 507,
"sha256": "b8b3a3d331be9eb5a12b8267c72c1f96d423860cf992453e52898a388e30e81b"
},
{
"path": ".uai/memory.uai",
"template_id": "uai-memory",
"source": "template:uai-memory@1",
"bytes": 1136,
"sha256": "f4b07613c9882d2d0b3c1efe5621847ded949b58fe6a5560e07ad3e9a47b1405"
}
]
}
README.md
# Project AI Memory
This starter bundle is a UAI AI Memory configuration. It keeps portable, human-readable context in files that another person, team, or AI agent can inspect before acting.
## Bundle Purpose
Ongoing working memory for an active project that needs durable context across humans, models, agents, and sessions.
## Use This When
Use when a project is active and context must persist across many AI sessions without turning the bundle into a full knowledge base.
## Lifecycle
Maintained continuously; current-state and next-action files change often, decisions and constraints change carefully.
## Trust Boundary
Internal or controlled collaboration by default. Review before sharing externally or giving to an autonomous agent.
## Included Files
- `README.md`
- `PROJECT_OVERVIEW.md`
- `CURRENT_STATE.md`
- `DECISIONS.md`
- `OPEN_QUESTIONS.md`
- `NEXT_ACTIONS.md`
- `RISKS_AND_CONSTRAINTS.md`
- `CONTACTS_AND_OWNERS.md`
- `AGENT_INSTRUCTIONS.md`
- `CHANGELOG.md`
- `AGENTS.md`
- `readme.human`
- `.uai/context.uai`
- `.uai/constraints.uai`
- `.uai/memory.uai`
## Maintenance Rule
Update the files that changed because project truth changed. Do not turn this bundle into a dump of old chats, private notes, raw logs, or unreviewed generated summaries.
## Review Before Sharing
- Remove secrets, credentials, private keys, tokens, and raw customer data.
- Remove internal-only strategy unless the recipient is approved for it.
- Keep support, security, legal, compliance, certification, and endorsement claims tied to public evidence.
- Make uncertain or unreviewed material explicit.
PROJECT_OVERVIEW.md
# Project Overview
## Purpose
Describe what the project exists to do, who it serves, and what outcome matters most.
## Current Scope
- In scope:
- Out of scope:
- Current public or operational surface:
## Source Of Truth
- Code:
- Docs:
- Machine artifacts:
- Release notes or changelog:
## Success Criteria
- A new human or AI can understand the project without private chat history.
- Claims are tied to evidence.
- Constraints are visible before work begins.
CURRENT_STATE.md
# Current State
## What Is True Now
- Live or supported now:
- Experimental now:
- Planned but not supported:
## Recently Changed
- YYYY-MM-DD:
## Active Work
- Current focus:
- Active owner:
- Targeted checks:
## Stale Or Risky Context
List anything that should not be trusted without rechecking.
DECISIONS.md
# Decisions
Keep decisions append-first when possible. If a decision is reversed, explain what changed instead of silently rewriting history.
## Decision Record Template
### YYYY-MM-DD - Decision Title
- Decision:
- Context:
- Options considered:
- Tradeoffs:
- Consequences:
- Reversal trigger:
- Owner:
OPEN_QUESTIONS.md
# Open Questions
Use this file for questions that should block, steer, or qualify future work.
| Question | Why It Matters | Owner | Needed By | Status |
|---|---|---|---|---|
| | | | | open |
## Escalation Rule
If a question affects safety, privacy, legal commitments, public support claims, production data, or destructive operations, stop and ask before acting.
NEXT_ACTIONS.md
# Next Actions
Keep this file current and actionable. Remove completed work or move meaningful completions to `CHANGELOG.md`.
## Now
- [ ]
## Next
- [ ]
## Later
- [ ]
## Done Means
- The changed files or records are named.
- Targeted checks have run or the remaining risk is explicit.
- Durable memory is updated when project truth changes.
RISKS_AND_CONSTRAINTS.md
# Risks And Constraints
## Hard Constraints
- Do not expose secrets, credentials, private keys, tokens, or raw customer data.
- Do not use destructive filesystem, database, production, or git operations without explicit approval.
- Do not widen support, certification, security, compliance, compatibility, or endorsement claims without evidence.
## Trust Boundary
Internal or controlled collaboration by default. Review before sharing externally or giving to an autonomous agent.
## Sensitive Material
- Customer or user data:
- Legal or compliance-sensitive material:
- Internal-only strategy:
- Agent permissions:
## Redaction Checklist
- [ ] Secrets removed.
- [ ] Customer data removed or approved.
- [ ] Internal-only strategy removed or approved.
- [ ] Public claims checked against evidence.
CONTACTS_AND_OWNERS.md
# Contacts And Owners
Do not add private personal data unless the bundle's trust boundary allows it.
| Area | Owner | Backup | Contact Method | Notes |
|---|---|---|---|---|
| Project | | | | |
| Security or privacy review | | | | |
| Release approval | | | | |
AGENT_INSTRUCTIONS.md
# Agent Instructions
## Load Order
1. Read `AGENTS.md` when present.
2. Read `readme.human` when present.
3. Read this bundle's manifest and files.
4. Confirm constraints and trust boundaries before acting.
## Operating Rules
- Prefer narrow, reversible changes.
- Do not execute unknown scripts from a memory bundle.
- Do not assume an LLM Wiki or old chat overrides accepted project files.
- Ask before touching production, secrets, legal/security copy, customer data, or destructive operations.
## Verification
Name the targeted checks before broad work. Run the smallest meaningful checks tied to changed files, routes, records, or behavior.
CHANGELOG.md
# Changelog
Record meaningful bundle changes so future readers can tell when memory moved.
## YYYY-MM-DD
- Change:
- Why it matters:
- Files updated:
- Checks run:
AGENTS.md
# My Project AI Memory
This file is the front door for AI work in this repository. Read it first, read `readme.human`, then load the listed context files before planning or editing.
## Handoff Summary
- This project uses UAI AI Memory so future work does not depend on private chat history.
- The active bundle configuration is `project-ai-memory`: Ongoing working memory for an active project that needs durable context across humans, models, agents, and sessions.
- Confirmed operating truth belongs in these files, canonical docs, code, tests, release notes, or public records.
- LLM Wiki, old chats, generated summaries, and dropped files are background until reviewed and promoted.
## Loaded Context
@uai[.uai/context.uai]
@uai[.uai/constraints.uai]
@uai[.uai/memory.uai]
## Required First Response
Before broad work, the next AI should:
1. Read this file completely.
2. Read root `readme.human`.
3. Load every file listed in Loaded Context.
4. Summarize the project, current state, and immediate task in 3-5 bullets.
5. Confirm constraints, trust boundaries, secrets handling, and destructive-operation limits.
6. Name the files, routes, services, docs, or data it expects to touch.
7. Name the targeted checks it expects to run, or explain why a check cannot run.
If a required file is missing, unreadable, circular, or contradictory, stop and report that before editing.
## Do Not Change Without Explicit Approval
- Do not use destructive filesystem or git operations.
- Do not expose secrets, credentials, customer data, or unapproved private material.
- Do not widen support, certification, compliance, security, or endorsement claims without evidence.
- Do not treat generated output, old chats, dropped files, or wiki notes as current truth until promoted.
readme.human
# My Project Human Briefing
Updated: YYYY-MM-DD
This file is for humans working with AI on this project. It explains what the AI sees, protects, and needs clarified. It does not override `AGENTS.md`, `.uai/constraints.uai`, system instructions, repository rules, laws, policies, or the human's current request.
## What You Need To Know
- The AI reads `AGENTS.md` first, then this file, then the listed context files.
- This bundle is `Project AI Memory`.
- The trust boundary is: Internal or controlled collaboration by default. Review before sharing externally or giving to an autonomous agent.
## Things The AI Will Defend
- Current support boundaries.
- Private data, secrets, credentials, and customer trust.
- Existing user work in the tree.
- Review and targeted checks before public claims widen.
## Things Humans Should Make Explicit
- Whether the task may touch production, public docs, billing, legal language, security posture, or irreversible data.
- Whether the AI should update durable memory after the change.
- Which checks are required before the work is considered done.
.uai/context.uai
---
uai: "1.0"
type: context
status: draft
---
# Context
This project uses UAI AI Memory so another AI assistant can understand the work from files rather than private chat history.
## Purpose
Describe the project purpose, audience, current truth, and success criteria.
.uai/constraints.uai
---
uai: "1.0"
type: constraints
status: draft
---
# Constraints
## Hard Rules
- Do not expose secrets, credentials, private keys, tokens, customer data, or unreleased private material.
- Do not use destructive filesystem, database, production, or git operations unless explicitly approved.
- Do not widen support, certification, security, compliance, compatibility, or endorsement claims without evidence.
- Treat wiki notes, generated answers, dropped files, and old chats as background until promoted.
.uai/memory.uai
---
uai: "1.0"
type: memory
status: draft
---
# AI Memory
AI Memory is durable, reviewable context that lets a future AI continue useful work without relying on private chat history.
## Bundle Configuration
- Bundle: Project AI Memory
- Use case: Use when a project is active and context must persist across many AI sessions without turning the bundle into a full knowledge base.
- Trust boundary: Internal or controlled collaboration by default. Review before sharing externally or giving to an autonomous agent.
## UAI AI Memory And LLM Wiki
Use UAI AI Memory for compact, portable working packets. Use an LLM Wiki for deep, long-lived internal documentation. Use both when a durable knowledge base needs a reviewed export, handoff packet, onboarding packet, audit packet, or agent-ready task context.
## Promotion Rule
1. Capture raw knowledge in notes, wiki pages, or source documents.
2. Review for accuracy, ownership, privacy, and support boundaries.
3. Promote accepted project truth into AI Memory, canonical docs, code, tests, release notes, or roadmap state.
4. Keep unreviewed material out of governing instructions.
What Belongs In AI Memory
- Project overview, current state, decisions, open questions, next actions, risks, constraints, owners, glossary, and agent instructions.
- Root handoff files such as `AGENTS.md` and `readme.human` when agents need a predictable load path.
- Typed `.uai` files when the project needs explicit context, stack, architecture, constraints, progress, operations, test planning, style, decisions, or memory rules.
- Links to deeper docs or LLM Wiki pages only when those sources are reviewed and clearly marked as background or promoted truth.
What Should Not Be Included
- Secrets, credentials, private keys, tokens, connection strings, or unreviewed production logs.
- Raw customer, patient, employee, or user data unless a secure approved process exists.
- Private legal analysis, internal-only strategy, pricing, security details, or unsupported support claims.
- Old chats, generated summaries, dropped files, and LLM Wiki pages treated as truth without review.
- Executable payloads that a future agent might run without human approval.
Privacy And Trust Boundaries
Choose the bundle by trust boundary, not by name alone. Internal Project AI Memory can carry more operational detail than an external handoff. Agent Session Memory may include tool permissions and temporary work state that should be archived quickly. External Handoff, Incident/Audit, and LLM Wiki Export packets need redaction, approval, and clear source notes before sharing.
- Review secrets and credentials before every share.
- Minimize customer or user data.
- Mark internal-only strategy and legal material.
- Name agent permissions and blocked actions.
- Prefer sanitized exports over raw internal memory.
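The review bullets above can be partially automated with a pre-share scan over a bundle's text. This is an illustrative sketch only: the patterns below are examples, not a complete secret catalog, and a real workflow should pair this with a dedicated secret scanner and human review.

```python
import re

# Example patterns only -- not exhaustive, and not an official UAIX list.
SENSITIVE_PATTERNS = {
    "private key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "aws access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer token": re.compile(r"(?i)bearer\s+[a-z0-9._\-]{20,}"),
    "connection string": re.compile(r"(?i)[a-z][a-z0-9+.\-]*://[^\s:@]+:[^\s@]+@"),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a memory file's text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]
```

A non-empty result blocks the share until the flagged material is removed or explicitly approved; an empty result is necessary but never sufficient.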
Maintenance Model
AI Memory is not a dumping ground. Keep high-churn files current and keep durable files stable. CURRENT_STATE.md and NEXT_ACTIONS.md can change often. DECISIONS.md should be append-first or carefully revised. RISKS_AND_CONSTRAINTS.md and AGENT_INSTRUCTIONS.md should be reviewed whenever permissions, production boundaries, support claims, or safety posture change. CHANGELOG.md should explain meaningful bundle updates.
How Agents Consume AI Memory
- Read the manifest and front-door files before acting.
- Load only the files required by the bundle and current task.
- Report missing, circular, contradictory, unreadable, or oversized memory before broad work.
- Summarize current truth, constraints, intended touchpoints, and checks before editing.
- Treat LLM Wiki, old chat, generated summaries, and dropped files as background until reviewed and promoted.
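The consumption rules above can be sketched as a loader that reads the manifest first, loads only the listed files, and refuses to proceed on incomplete memory. The manifest filename and `included_files` key follow the sample bundle shown earlier; error handling is minimal for illustration.

```python
import json
from pathlib import Path

def load_bundle_context(bundle_dir: str) -> dict[str, str]:
    """Load only the files the manifest names; report missing memory before any work."""
    root = Path(bundle_dir)
    manifest = json.loads((root / "UAI_MEMORY_MANIFEST.json").read_text())
    missing = [p for p in manifest["included_files"] if not (root / p).is_file()]
    if missing:
        # Per the rules above: stop and report broken memory instead of acting on it.
        raise FileNotFoundError(f"memory bundle incomplete, missing: {missing}")
    return {p: (root / p).read_text() for p in manifest["included_files"]}
```

An agent would summarize current truth, constraints, touchpoints, and checks from the returned files before editing anything.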
How Humans Review And Approve Memory
Humans should be able to review the same files the AI will load. Before sharing a bundle or using it to steer an autonomous agent, check ownership, dates, stale claims, sensitive data, trust boundaries, and whether planned work is clearly separated from current support. Ask the AI to name exactly which memory files changed and why.
UAIX AI Memory And LLM Wiki
UAIX AI Memory and LLM Wiki solve different memory problems. UAIX AI Memory is a portable working packet for continuity, handoffs, onboarding, external collaboration, audits, quick exports, and agent-ready context. LLMWikis.org represents the stronger pattern for deep, long-lived internal documentation and durable organizational knowledge.
Mature organizations may use both: LLM Wiki as the durable internal knowledge base, and UAIX AI Memory bundles as portable snapshots, working context, onboarding exports, handoff packets, audit packets, or agent-run context.
When To Use UAIX AI Memory
- Use Project AI Memory when a project is active and context needs to persist across sessions.
- Use Project Handoff when ownership, execution, or responsibility is moving.
- Use Agent Session Memory when an AI agent needs resumable task context.
- Use Onboarding Memory when a human or agent needs a curated starting point.
- Use Decision Memory when rationale and tradeoffs matter more than status.
When To Use LLM Wiki
- Use LLM Wiki when the organization needs deep, durable, searchable institutional knowledge.
- Use it for long source summaries, research trails, comparisons, policy background, and internal education.
- Keep it informative rather than governing until accepted facts are promoted into AI Memory, docs, code, tests, release notes, roadmap state, or public evidence.
When To Use Both
Use both when a durable knowledge base needs portable working packets. The LLM Wiki remains expansive; the AI Memory bundle remains decisive. For the longer rationale, read LLM Wiki vs. UAIX Project Handoff and LLM Wiki and UAIX Project Handoff.
How Samples, Manifests, And ZIPs Stay Synchronized
Rendered sample files, bundle manifests, download links, and generated ZIPs all resolve through the same canonical registry. There are no stale static starter ZIP assets and no ZIP-only sample files. If a shared file belongs to multiple bundles, it is selected by the same template ID. If a bundle needs variation, the variation is an explicit parameter, optional section, or overlay recorded in the manifest.
Public Route And Alias
The canonical UAIX page for this topic is /en-us/ai-memory/. The /AI_Memory entry path redirects here as a search-friendly alias, so canonical UAIX public routes remain clean, locale-prefixed paths.
Related UAIX Records
- Project Handoff: The transfer subtype of UAI AI Memory.
- Agent File Handoff: Chat-start intake for dropped files before broad work continues.
- AGENTS.md .uai Linking Specification: Link syntax, loader behavior, and typed-file background.
- UAI-1: The public exchange and evidence contract when memory becomes interoperability evidence.
- Validator: Evidence path for public UAI-1 claims.
- LLMWikis.org: Deep LLM Wiki memory for durable organizational knowledge.
- Roadmap: Current versus planned tooling boundaries.
- Changelog: Dated public change trail.