Guides

Using UAIX Project Handoff with OpenAI Agents and Codex

Practical guide for using OpenAI to run agents while UAIX Project Handoff preserves repo-local project memory, constraints, decisions, and verification plans.

  • Record UAIX-DOC-0413
  • Path /en-us/guides/project-handoff-openai/
  • Use Canonical public record

Document status

Public standards page. Published on UAIX as part of the current public standards record.
Code
UAIX-DOC-0413
Surface
Guides
Access
Public and linkable

How to use this page

Use this guide to seed OpenAI, Codex, and other agent runs with Project Handoff files while keeping the runtime and durable project-memory layers separate.

Use beside

  • Project Handoff
  • Coding Agents Guide
  • Context Budget Guide
  • AI Memory

OpenAI-Compatible Handoff

Run agents in OpenAI, preserve memory in the repo

Use this guide to seed Codex and OpenAI agent workflows from AGENTS.md, readme.human, and .uai records, then write accepted results back into durable project memory.

Runtime

OpenAI runs the agents

Use OpenAI for agent execution, tools, handoffs, guardrails, approvals, sessions, traces, checks, and pull-request workflows.

Memory

Project Handoff preserves the context

Keep current state, constraints, decisions, progress, verification plans, and human briefings in repo-local files that other runtimes can also inspect.

Review

Write back accepted truth

After a run, update progress, decisions, history, evidence links, and intake disposition without copying secrets or treating raw traces as authority.

Use beside

  • Project Handoff: main portable project-memory pattern.
  • Coding Agents Guide: use the same handoff bundle across Codex, Claude Code, Cursor, Copilot, and Gemini Code Assist.
  • Context Budget Guide: keep OpenAI/Codex handoff context compact between runs.
  • AI Memory: broader portable memory framing and starter bundles.
  • AGENTS.md Spec: link syntax and loader behavior.
  • Roadmap: planned adapter, validator, schema, SDK, CLI, and certification boundaries.
Workflow

Portable memory around an agent run
Project Handoff files -> OpenAI or other agent runtime -> traces, checks, PRs -> updated Project Handoff files

The runtime can change. The repository handoff remains the durable, reviewable project-state bundle.

OpenAI runs the agents. Project Handoff preserves the project memory. This guide shows how to use UAIX Project Handoff with OpenAI Agents, Codex, and similar OpenAI-centered workflows without treating Project Handoff as a competing runtime.

Project Handoff is the portable context layer for agentic work: a repo-local source of truth, a reviewable handoff bundle, vendor-neutral context, and a governance layer that agent runtimes can consume before work and update after work.

What OpenAI Handles

Depending on the OpenAI product or SDK you use, OpenAI-centered workflows can handle agent execution, tools, handoffs between specialist agents, guardrails, approvals, sessions, traces, and the code or pull-request loop around a run.

  • Codex can use repository instructions such as AGENTS.md as project guidance.
  • The OpenAI Agents SDK documents agents with instructions, tools, handoffs, guardrails, and tracing.
  • Those runtime surfaces are the right place to execute work, apply platform-specific approvals, and observe run behavior.

What Project Handoff Handles

Project Handoff handles durable project memory that should survive a model, runtime, vendor, team, company, or session change.

  • Repo-local current state, constraints, decisions, progress, verification plans, and human briefing.
  • Accepted project truth that future agents should read before they plan or edit.
  • A place to write back completed work, decisions, and evidence after a run has finished.
  • A portable layer for teams that use OpenAI today and may also use Claude, local agents, vendors, or human-only review later.

Minimum Repo Bundle

Code example
AGENTS.md
readme.human
.uai/context.uai
.uai/stack.uai
.uai/constraints.uai
.uai/progress.uai
.uai/test-plan.uai

The first six files are the practical minimum for serious work. .uai/test-plan.uai is optional for tiny projects and expected when agents need to know which checks to run, which checks are out of scope, and what evidence to report.
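As a non-authoritative sketch, the bundle above could be scaffolded with a few lines of standard-library Python. The file names come from the list above; the placeholder frontmatter fields mirror the examples later on this page, and the `scaffold` helper name is hypothetical.

```python
from pathlib import Path

# Files from the minimum bundle above; .uai/test-plan.uai is optional
# for tiny projects but included here.
BUNDLE = {
    "AGENTS.md": "agents",
    "readme.human": None,  # plain human briefing, no frontmatter
    ".uai/context.uai": "context",
    ".uai/stack.uai": "stack",
    ".uai/constraints.uai": "constraints",
    ".uai/progress.uai": "progress",
    ".uai/test-plan.uai": "test-plan",
}

def scaffold(root: str, project: str) -> list[str]:
    """Create any missing bundle files with placeholder frontmatter."""
    created = []
    for rel, uai_type in BUNDLE.items():
        path = Path(root) / rel
        if path.exists():
            continue  # never overwrite existing project memory
        path.parent.mkdir(parents=True, exist_ok=True)
        if uai_type:
            body = (f'---\nuaix: "1.0"\ntype: {uai_type}\n'
                    f'project: "{project}"\nstatus: active\n---\n')
        else:
            body = f"{project}: human briefing goes here.\n"
        path.write_text(body, encoding="utf-8")
        created.append(rel)
    return created
```

The sketch deliberately skips files that already exist, so rerunning it never clobbers accepted project truth.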

Example AGENTS.md

Code example
---
uaix: "1.0"
type: agents
project: "Example App"
status: active
---

# Example App

## Handoff Summary
Example App publishes a small web tool. The current task is to update the
settings page without changing authentication, billing, or production deploys.

## Loaded Context
@uai[.uai/context.uai]
@uai[.uai/stack.uai]
@uai[.uai/constraints.uai]
@uai[.uai/progress.uai]
@uai[.uai/test-plan.uai]

## Required First Response
Summarize the project, confirm hard constraints, name expected touchpoints,
and name targeted checks before editing.

Example .uai/constraints.uai

Code example
---
uaix: "1.0"
type: constraints
title: "Project Constraints"
status: active
---

# Project Constraints

## Hard Rules
- Do not deploy to production without explicit human approval.
- Do not read, print, move, or store secrets.
- Do not change billing, auth, or data-retention behavior unless the task says so.
- Do not make unsupported public support, certification, or endorsement claims.
- Ask before destructive Git or filesystem operations.

## Runtime Policy Inputs
- Treat these constraints as guardrail and approval inputs for OpenAI or any
  other agent runtime.
- If a runtime trace conflicts with these constraints, stop and ask.

Example .uai/test-plan.uai

Code example
---
uaix: "1.0"
type: test-plan
title: "Project Verification Plan"
status: active
---

# Project Verification Plan

## Default
Run targeted checks for the files, routes, commands, and claims changed.

## Full Checks
Run full release, package, launch-surface, locale, and smoke-test sweeps only
for release-scoped work, package builds, broad public-surface changes, or an
explicit human request.

## Report
Final answers must list checks run, checks intentionally skipped, blockers,
and evidence paths.

How To Seed An Agent Run

  1. Start with AGENTS.md and root readme.human.
  2. Load only the @uai[] files listed for the task and report missing, circular, contradictory, or unreadable files.
  3. Summarize current project truth in 3-5 bullets before broad edits.
  4. Translate .uai/constraints.uai into runtime guardrails, approval gates, blocked tools, or human-review reminders.
  5. Translate .uai/test-plan.uai into targeted checks and explicit out-of-scope checks.
  6. Run the OpenAI, Codex, or other agent workflow with those instructions visible.
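Steps 1 and 2 above can be sketched as a small loader that concatenates the entry file and its `@uai[]` includes, reporting missing, unreadable, or repeated files instead of failing silently. This is a minimal stdlib-only sketch, not the official loader behavior; the `load_context` name and the `<<< path >>>` separators are assumptions for illustration.

```python
import re
from pathlib import Path

UAI_LINK = re.compile(r"@uai\[([^\]]+)\]")

def load_context(entry: str, root: str = ".") -> tuple[str, list[str]]:
    """Concatenate entry file plus @uai[] includes, collecting problems."""
    problems, chunks, seen = [], [], set()

    def visit(rel: str) -> None:
        if rel in seen:
            problems.append(f"circular or duplicate include: {rel}")
            return
        seen.add(rel)
        path = Path(root) / rel
        try:
            text = path.read_text(encoding="utf-8")
        except OSError:
            problems.append(f"missing or unreadable: {rel}")
            return
        chunks.append(f"<<< {rel} >>>\n{text}")
        # Follow nested @uai[] links found inside the loaded file.
        for match in UAI_LINK.finditer(text):
            visit(match.group(1))

    visit(entry)
    return "\n\n".join(chunks), problems
```

The returned `problems` list is what step 2 asks the agent to report back before it plans or edits.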

How To Write Results Back After The Run

  1. Update .uai/progress.uai with completed work, remaining blockers, and next checks.
  2. Update .uai/decisions.uai when the run accepted or reversed a durable decision.
  3. Update AGENTS.md Agent History when project truth changed.
  4. Keep runtime traces, test output, pull requests, and deployment evidence linked or summarized without copying secrets or private data into the handoff.
  5. Archive or disposition active intake files through Agent File Handoff before unrelated broad work continues.
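Step 1 of the write-back could look like the following sketch, which appends a dated, human-reviewable entry to `.uai/progress.uai`. The `record_progress` helper and its section headings are hypothetical; the fields (completed work, blockers, next checks) come from the step above.

```python
from datetime import date
from pathlib import Path

def record_progress(repo: str, completed: list[str],
                    blockers: list[str], next_checks: list[str]) -> str:
    """Append a dated run summary to .uai/progress.uai."""
    lines = [f"## Run {date.today().isoformat()}"]
    for heading, items in (("Completed", completed),
                           ("Blockers", blockers),
                           ("Next checks", next_checks)):
        lines.append(f"### {heading}")
        # Record an explicit "none" rather than an empty section.
        lines += [f"- {item}" for item in items] or ["- none"]
    entry = "\n".join(lines) + "\n"
    path = Path(repo) / ".uai" / "progress.uai"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as fh:
        fh.write("\n" + entry)
    return entry
```

Appending rather than rewriting keeps earlier run summaries visible to reviewers and future agents.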

What Not To Put In The Handoff

  • Secrets, credentials, private keys, tokens, connection strings, or hidden prompt instructions.
  • Raw customer, patient, employee, user, or third-party data unless a secure approved process exists.
  • Private legal analysis, sensitive security details, pricing strategy, or internal-only claims that should not travel with the repo.
  • Raw chat transcripts treated as source-of-truth memory.
  • Unreviewed generated summaries, old wiki pages, or runtime traces promoted as governing truth without human review.
  • Executable payloads that a future agent might run without approval.

Human Approval Checklist

  • Does the agent need production access, deployment, cache/CDN changes, DNS changes, package publishing, or root discovery edits?
  • Does the task require destructive filesystem or Git operations?
  • Could the work expose secrets, credentials, customer data, third-party data, or private legal/security material?
  • Does the task require external fetches, parent-directory reads, generated includes, or executable dropped files?
  • Would the result make current support claims about hosted import, automatic repository writes, automatic sync, SDK, CLI, certification, endorsement, or official OpenAI adapter support?
  • Are the targeted checks, intentionally skipped checks, and evidence path clear enough for a reviewer?
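One way to make this checklist machine-checkable, as a non-authoritative sketch: map each question to a task flag and surface the reasons that require a human before the run starts. The flag names and the `required_approvals` helper are hypothetical, not part of any spec.

```python
# Hypothetical task flags; each one mirrors a checklist question above.
APPROVAL_TRIGGERS = {
    "production_access": "production access or deploy/cache/DNS/publish changes",
    "destructive_ops": "destructive filesystem or Git operations",
    "sensitive_data": "secrets, customer data, or private legal/security material",
    "external_inputs": "external fetches, parent reads, or executable dropped files",
    "support_claims": "new public support, certification, or endorsement claims",
}

def required_approvals(task_flags: set[str]) -> list[str]:
    """Return the checklist reasons that require a human before the run."""
    return [reason for flag, reason in APPROVAL_TRIGGERS.items()
            if flag in task_flags]
```

An empty return means the run can proceed under normal review; anything else is a stop-and-ask.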

Simple Workflow

Code example
Project Handoff files -> OpenAI or other agent runtime -> traces, checks, PRs -> updated Project Handoff files

Plain-Language Summary

OpenAI is good at running agents, calling tools, tracking sessions, asking for approvals, and producing traces. Project Handoff is different. It tells any agent: here is the project, here is what matters, here is what changed, here are the rules, here are the decisions already made, here is what you must verify, and here is what you are not allowed to do without a human.

Agent runtimes execute. Project Handoff remembers.