Context and AI Collaboration
In experimenting with AI to build software—some simple things, some more complex—I've noticed a few things:
- While working on the more complex projects I sometimes lose track of where I've left off and where I need to pick up.
- I repeat myself to remind the AI tools what we are trying to build.
- I am not (yet?) set on one particular toolset -- I've experimented with different ones, sometimes on the same project.
Many times over this past year, I've found myself searching for ways to make this workflow better. It evolved organically, and eventually I started capturing project context in a file named `CONTEXT.md` in my git repo.
How I use CONTEXT.md
I initially used `CONTEXT.md` to capture at a high level "What we're building and why" as well as the "Current status" and "Resolved issues" for the project.
- What we're building and why: Provides clarity around what we're trying to build. In one project, I created a Notion-Git tool that syncs a git repo's `TODO.md` file with a Notion database, including dates, priority, and status. I wanted the tool to sync BOTH ways. Without that key piece of context, AI tools were proposing code for a one-way sync rather than both.
- Current status: Tracks what we've done and what's next, making the next session's focus clear -- to me, and to the AI.
- Resolved issues: Avoids recurring mistakes, or heading down paths you've already ruled out. While refactoring my blog from Gatsby to 11ty, I had to give the following prompt: "Somehow in this, we broke the image paths in the site. Did you get too aggressive with removing things?" I made sure this was well documented in the Resolved issues area -- where to store globally used images versus the images in each blog post -- so that we didn't run into it again (see the sketch after this list).
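That entry looks something like this -- illustrative wording, not verbatim from my repo:

```markdown
## Resolved issues

- **Broken image paths (Gatsby → 11ty migration).** A refactor removed image
  directories too aggressively. Globally used images live in the site-wide
  assets folder; images for a single post live alongside that post. Don't
  "clean up" either location.
```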
What about memory?
While AI assistants now have memory features, I find myself wanting something more structured. At the start of each session, I want to quickly orient both myself AND the AI: Where are we in the overall project? What did we accomplish last time? What's blocking us? What should we tackle today?
Memory can help with some of this, but at least for now:
- Memory alone doesn't solve documentation needs
- You can't version control or collaborate around AI memory
- Different AIs can't share memory between them
- I wanted explicit, reviewable context that lives with the code, not in an AI's memory
`CONTEXT.md` became incredibly valuable when switching between different AI tools -- I could ask one AI to review another AI's work, and it would immediately have the full context available. The file becomes a handoff document between collaborators, whether those collaborators are human or AI.
My current CONTEXT.md
I'm currently thinking of `CONTEXT.md` as a collaborative project brief that evolves as you build. It lives alongside your code and captures the following (a bare skeleton follows the list):
- What we're building and why - the problem, core functionality, and target users
- Current status - working features, in-progress items, known issues
- Architecture decisions - and crucially, WHY you made them
- Session notes - what you accomplished, what you learned, what broke
- Resolved issues - problems you've already solved (so AI doesn't suggest them again)
- File architecture - how the codebase is organized and key dependencies
- Testing workflows - validation approaches and current test coverage
- Key learnings - insights gained from debugging and implementation
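To make the structure concrete, here's a skeleton with those eight sections -- headings only, and the names are just mine, so rename or reorder freely:

```markdown
# CONTEXT.md

## What we're building and why
## Current status
## Architecture decisions
## Session notes
## Resolved issues
## File architecture
## Testing workflows
## Key learnings
```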
The Workflow
After each coding session, I ask the AI to update `CONTEXT.md` based on what we accomplished. I review and commit those changes to version control. The version history becomes an engineering journal showing how the project evolved.
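The entries the AI writes are short. A Session notes update from the Notion-sync project might read like this (date and wording invented for illustration):

```markdown
## Session notes

### 2025-06-14
- Implemented two-way sync: Notion edits now flow back into TODO.md
- Fixed status mapping for "In progress" items
- Next: handle priority changes made on the Notion side
```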
Some prompts I use:
- "Update
CONTEXT.mdto reflect the changes we just made" - "What's missing from
CONTEXT.mdthat would help you work more effectively?" - "Review this
CONTEXT.mdand tell me what we accomplished last session and what we should focus on today"
Context Engineering
Turns out I'm not the only one thinking about this -- it seems to be part of "context engineering": deliberately managing what information flows to AI assistants. Here are some related approaches worth exploring:
- AGENTS.md: The most standardized approach, stewarded by the Agentic AI Foundation under the Linux Foundation. Adopted by OpenAI, Cursor, Google's Jules, and Factory. Think of it as a "README for agents" -- a predictable place for AI coding agents to find project context and instructions.
- Donn Felker's `llm-context.md`: Remarkably similar to the approach I'm using! Donn uses `llm-context.md` files that get updated by AI after each session. He also creates dated versions (`llm-context-YYYYMMDD-01.md`) for experimental work, which is brilliant for tracking alternative approaches.
- Andre Figueira's `.context` directory: A more elaborate system using a `.context/` directory with separate files for different concerns (`ai-rules.md`, `glossary.md`, `anti-patterns.md`, etc.) -- roughly the layout sketched below. Much more comprehensive but also more overhead to maintain.
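For reference, that layout looks roughly like this (only the files named above; the full system includes more):

```
.context/
├── ai-rules.md
├── glossary.md
├── anti-patterns.md
└── ...
```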
Try It Yourself
Here's how to get started:
- Create `CONTEXT.md` in your project root
- Start with basic sections (a minimal starter is sketched after this list):
  - What you're building
  - Current status
  - Architecture overview
  - Session notes
- After each AI session, prompt: "Update `CONTEXT.md` with what we accomplished today"
- At the start of each session, prompt: "Read `CONTEXT.md` and summarize where we are"
- Commit changes to version control to track evolution
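If you want something to copy, here's a minimal starter built from those four sections -- the placeholders and headings are just suggestions:

```markdown
# CONTEXT.md

## What we're building
One or two sentences on the problem, the core functionality, and who it's for.

## Current status
- Working: ...
- In progress: ...
- Known issues: ...

## Architecture overview
The key components, and why you chose them.

## Session notes
### YYYY-MM-DD
What you accomplished, what you learned, what broke.
```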
Still Exploring
The template is still evolving, and I'd love to hear what's working for you. Are you using AGENTS.md? A custom approach? Something completely different?
The beauty of this movement is that we're all discovering what makes AI collaboration more effective - and there's clearly no single right answer. The best approach is the one that fits your workflow and actually gets maintained.
What patterns have you found effective for maintaining context across AI collaboration sessions?