This post is the first in a series about taking your AI coding agents to the next level.
The learning process usually starts with “vibe coding”. You launch an AI coding agent, give it a simple prompt, and look at the result. It’s generally not great, so you invest the time to learn how to give better prompts. Maybe you even take the time to learn a pattern like Spec Driven Development. The quality of the output improves. It’s relatively usable. And that’s where most developers stop. They’re maybe using AI coding agents at 20% of their potential. This series tries to fix that, one layer at a time. AI agent instruction files are the foundation on which everything else builds.
Defining the Problem
AI coding agents have a lot of promise, but they often miss the mark. They live up to only a small percentage of their potential as partners and pair-programmers. The problem is that, “out of the box”, AI coding agents are generalists. They know a little about a lot of things, but much of that knowledge is basic and stale.
This limitation is similar to a human developer who fails to keep their skillset sharp or to focus on a specific domain. Their code is functional - it builds, it runs, it works - but it falls short on effectiveness, quality, and efficiency.
Thankfully, there is a lot we can do to improve that. We can focus, hone, and improve the skills of our AI partner. One of the first ways we can do that is with instructions files.
Why Is an Instructions File Useful?
AI agents have no memory. They don’t know your codebase or the conventions that you have for that codebase. As a result, every time you start a “conversation” with them, they have to include all those details as part of the context. If you don’t explicitly explain those details each time, they make their best guess based on the existing codebase and what they can see. Those generated assumptions will vary from request to request. The result is highly inconsistent output. Not only is the output inconsistent, but the cost of generating those assumptions each time is high.
Compare it to onboarding a new developer. What would the cost be if you had to re-explain the codebase and your coding standards every single time you gave that developer a new task? Worse, what if the developer forgot everything you told them last time, so you had to start over from scratch with every new assignment? You wouldn’t tolerate that. Instead, you’d create a reference document the new developer could consult.
And that’s exactly how AI coding agents work. They literally forget everything you told them last time, and you have to start over. So you create a document that captures all of that information in a concise format, and you no longer have to repeat those instructions each time. That’s what an instructions file is.
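To make the mechanics concrete, here is a minimal sketch in Python of what agents effectively do with an instructions file. This is conceptual only, not any particular agent’s actual API: the standing instructions are read and prepended to every prompt, so your conventions travel with each request automatically.

```python
# Conceptual sketch only -- real agents have much richer context handling,
# but the core idea is the same: the instructions file is prepended to the
# prompt so you never have to restate your conventions by hand.
from pathlib import Path

def build_context(instructions_path: str, user_prompt: str) -> str:
    """Combine the project's standing instructions with a one-off prompt."""
    instructions = Path(instructions_path).read_text()
    return f"{instructions}\n\n---\n\n{user_prompt}"
```

Every conversation starts from the same curated baseline instead of from whatever the agent happens to infer that day.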
The Landscape
Unfortunately, the AI coding agents landscape is strewn with competing standards. While some of it is coalescing into coherent standards, there’s still a lot of variability. The exact method of implementing an instructions file varies depending on the agent you are using. Some agents have learned to look for files that are created for other agents, but it’s not consistent by any means. Let’s start with a quick overview of the file names and locations for some of the more common agents out there.
| Agent | FileName | Location & Scope |
|---|---|---|
| Claude | CLAUDE.md | Placed at the project root or in a subfolder, it provides instructions for that folder and its subfolders. Placed at ~/.claude/, it provides global instructions that apply to every task, regardless of project. |
| OpenAI Codex/ChatGPT | AGENTS.md | Placed at the project root or subfolder, it provides instructions for that folder and any subfolder. You can also add a global AGENTS.md file in the user’s home directory. |
| GitHub Copilot | .github/copilot-instructions.md | A single file placed at the project root and scoped to the entire project; do not create multiple copilot-instructions files. However, Copilot will also honor any AGENTS.md or CLAUDE.md files it finds in the project and will try to respect their location-based scoping. Note that scoping for files found in subfolders is an experimental feature and not always consistently honored at the moment. You can also provide organization-level Copilot instructions via the web interface for your GitHub organization. |
| Cursor | .cursor/rules | You create multiple .mdc files in this directory to handle different contexts (e.g., frontend.mdc, api-design.mdc). While you cannot create a global file, you can define global rules in the Cursor application. |
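Putting the table together, a repo that supports several agents at once might contain the files below. The commands just sketch the layout; `myproject` and the `.mdc` file names are placeholders.

```shell
# Sketch of a repo layout supporting multiple agents at once.
# "myproject" and the rule file names are placeholders.
mkdir -p myproject/.github myproject/.cursor/rules

touch myproject/CLAUDE.md                          # Claude: project-wide rules
touch myproject/AGENTS.md                          # Codex/ChatGPT: project-wide rules
touch myproject/.github/copilot-instructions.md    # Copilot: single project file
touch myproject/.cursor/rules/frontend.mdc         # Cursor: one .mdc per context
touch myproject/.cursor/rules/api-design.mdc
```

The files can coexist without conflict, since each agent reads only the files it knows about (and, in Copilot’s case, some of its neighbors’).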
With all of these agents, the rules in the closest instructions file take precedence. So, if you have a specific sub-project or folder where you want the rules to be different than at a higher tier, you can do so.
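For example, a hypothetical repo might relax a root-level rule for one legacy subfolder. The root file sets the default, and the nested file closest to the code wins:

```
<!-- /CLAUDE.md (project root) -->
- All new tests use xUnit.

<!-- /tests/Legacy/CLAUDE.md (subfolder override) -->
- This folder predates the xUnit migration; keep its existing MSTest tests as-is.
```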
What Belongs in an Instructions File
So now we know where to put the file and what to name it. The next question is: What do we put in our instructions file?
An instructions file is a markdown text file. Aside from that, the contents are free-form. You can put any instructions into the file that you want the AI coding agent to abide by. But generally speaking, there are a few key categories of instructions that can contribute to making the file effective.
- Project architecture - Define the preferred solution structure, the module boundaries, and the key abstractions to use
- Coding conventions - naming, file organization, and the coding practices and patterns to follow. For example: “Prefer the use of Task.WhenAll instead of Parallel.ForEach”.
- Testing expectations - define the framework (xUnit, MSTest, etc.), the patterns to follow, expected code coverage, and what to test
- Libraries & packages - define the preferred NuGet packages, and explicitly list any packages that should not be used, along with why. For example: “Use Ardalis.GuardClauses, not manual null checks”.
- Off-limits patterns - define anti-patterns, deprecated approaches, architectural violations, and things that should never be done.
- Build & run commands - Explain how to build, test, and run locally. Agents use these instructions to help verify results.
- Domain vocabulary - define the ubiquitous language and project-specific terms. This helps the agent better understand instructions you provide without you needing to explain each time.
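Put together, those categories map naturally onto a skeleton you can start from. This is only a sketch, not a prescribed format:

```
# Project Instructions
## Architecture          <!-- solution structure, module boundaries -->
## Coding Conventions    <!-- naming, organization, preferred patterns -->
## Testing               <!-- framework, patterns, coverage expectations -->
## Packages              <!-- preferred and forbidden dependencies, with reasons -->
## Do Not                <!-- anti-patterns and deprecated approaches -->
## Build & Run           <!-- commands the agent can use to verify its work -->
## Vocabulary            <!-- ubiquitous language and project-specific terms -->
```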
What Doesn’t Belong
There are certain things you should not include in your instructions file:
- Sensitive credentials or secrets - Just as you should never put such information in your appsettings files, remember that these agent instruction files are checked in to your repo along with all the other code. It’s not a place for sensitive information.
- Excessively long prose - These instructions files are included with the context that the agent sends up into the cloud with each prompt. Keep your text short and to the point. Explain the need in as few words as possible.
- Information already understood - You don’t need to define things like basic, common syntax. Even a generalist AI agent understands how to write a for loop.
- Stale or speculative guidance - Keep your document up to date. If something no longer applies, remove it. Review the file regularly. Don’t include guesses or vague information either.
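As an illustration of trimming prose, the same rule can usually be stated in a fraction of the words:

```
<!-- Too long -->
When you are writing any kind of asynchronous code in this project, we would
really prefer that you avoid blocking calls because they can cause deadlocks.

<!-- Better -->
- Async all the way down. Never block on tasks with `.Result` or `.Wait()`.
```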
Scoping Instructions: Global vs Project vs Directory
If you are using scoped instructions, maintain a clear separation of what instructions are detailed in each file. Do not overlap or repeat yourself. Consider carefully where an instruction belongs, and at what level it applies, before you decide which file to add the instruction to.
For instance, instructions regarding formatting and verbosity are typically personal preferences and not project specific and would belong in a global file. Team conventions would belong at a project or organization level. Targeted context items belong in a folder specific to that context. For example, rules regarding payment systems might be scoped to a domain subfolder.
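As a concrete (hypothetical) split, personal style goes in the global file, team rules in the project file, and domain-specific rules in the relevant subfolder:

```
<!-- ~/.claude/CLAUDE.md (global: personal preferences) -->
- Keep explanations brief; show code first, prose second.

<!-- /CLAUDE.md (project: team conventions) -->
- Use Ardalis.GuardClauses for input validation.

<!-- /src/Payments/CLAUDE.md (directory: domain-specific rules) -->
- All monetary amounts use the `Money` value object; never raw decimals.
```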
I should point out one variation here. Cursor can scope its instruction files to specific files or folders using glob patterns. So instead of placing the file in the folder it applies to, all the rule files go in the .cursor/rules/ folder, and each file’s frontmatter declares what it applies to:
```
---
description: Rule description
globs:
  - "src/components/**/*.tsx"   # Applies only to TSX files in components
  - "**/__tests__/**"           # Applies to any test directory
alwaysApply: false
---

# Instructions
- Use functional components.
- Use Tailwind for styling.
```
Keep Your Instructions Healthy
I mentioned before that you should regularly review your instructions files. Treat them as living documents and update them whenever conventions, patterns, or rules change. At a minimum, review them every six months. Starting a new project or onboarding a new team member is also a great time to review the documents and make sure they’re current. So is a major release of a library or platform you depend on - for instance, the new version of .NET that ships each fall. Finally, review them any time you make an architectural change to a project.
As you review them, ask yourself: Do these rules still reflect the reality of our coding environment? Have we evolved our standards? Added new standards? Is there anything we do regularly as a development team that isn’t reflected in these instructions?
Don’t be afraid to update the instructions to ensure they match that reality. Treat them like code, not like documentation. Like any other code in your project, changes to instructions files should go through a full code review. In particular, check that updates are scoped properly: you don’t want personal preferences added to the project instructions, or business-logic rules added to the API project instead of the domain project.
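One lightweight way to enforce that review on GitHub is a CODEOWNERS entry, so that every change to an instructions file requires sign-off from a designated group. The team handle here is a placeholder:

```
# .github/CODEOWNERS -- require review on agent instruction files
CLAUDE.md                         @your-org/tech-leads
AGENTS.md                         @your-org/tech-leads
.github/copilot-instructions.md   @your-org/tech-leads
```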
Practical Examples
It’s helpful to look at a couple of examples of how to structure and write these instructions files.
First, an example CLAUDE.md for a .NET modular monolith:

```markdown
# Project Instructions for Claude

## Solution Structure
- `src/` — application code organized by module (e.g., `Orders`, `Catalog`, `Identity`)
- `tests/` — mirrors `src/` structure; unit tests live next to the module they test
- `docs/` — architecture decision records (ADRs) and diagrams

## Architecture
- This is a modular monolith. Module boundaries are enforced — do not reference one
  module's internals from another. Use shared kernel types in `src/SharedKernel/`.
- Domain logic belongs in the Domain layer. No EF Core, no HTTP clients, no logging
  in domain classes.
- Application layer orchestrates use cases via command/query handlers (MediatR).

## Coding Conventions
- Use `Result<T>` (from Ardalis.Result) for operation outcomes. Do not throw exceptions
  for expected domain errors.
- Guard clauses go at the top of methods using `Ardalis.GuardClauses`.
- Prefer records for value objects. Implement `IAggregateRoot` on aggregate roots.
- Async all the way down. No `.Result` or `.Wait()` on tasks.

## Preferred Packages
- `Ardalis.Result` — operation results
- `Ardalis.GuardClauses` — input validation
- `FastEndpoints` — API endpoints (not MVC controllers)
- `xUnit` + `FluentAssertions` + `NSubstitute` — testing
- `Verify` — snapshot testing for complex outputs

## Testing
- Every use case handler must have unit tests.
- Use the Arrange / Act / Assert pattern with a blank line between each section.
- Name tests: `MethodName_Condition_ExpectedResult`
- Integration tests use `WebApplicationFactory` and live in `tests/Integration/`.

## What NOT to Do
- Do not use `MediatR` notifications for intra-module communication — use direct
  service calls. Notifications are for cross-module events only.
- Do not add new NuGet packages without a comment explaining why.
- Do not use `var` when the type isn't obvious from the right-hand side.

## Build & Verify
- Build: `dotnet build`
- Test: `dotnet test`
- Always run tests before marking a task complete.
```
And an example AGENTS.md for the same project:

```markdown
# Agent Instructions

## Who This Is For
AI coding agents working in this repository (Codex, ChatGPT, Copilot, etc.).
Read this file before making any changes.

## Project Summary
ASP.NET Core 9 modular monolith. Domain-Driven Design. CQRS via MediatR.
FastEndpoints for HTTP layer. PostgreSQL via EF Core 8.

## Key Conventions

### Naming
- Handlers: `CreateOrderHandler`, `GetOrderByIdHandler`
- Endpoints: `CreateOrderEndpoint`, `ListOrdersEndpoint`
- Value objects: sealed records in the domain layer
- Interfaces: `IOrderRepository`, not `OrderRepositoryInterface`

### File Layout
New feature in the `Orders` module:
- src/Orders/
  - Domain/ ← entities, value objects, domain events
  - Application/ ← handlers, DTOs, validators
  - Infrastructure/ ← EF config, repositories
  - Endpoints/ ← FastEndpoints request/response/endpoint classes

### Error Handling
Return `Result` or `Result<T>` from handlers. Map to HTTP status in the endpoint:
- `Result.NotFound()` → 404
- `Result.Invalid()` → 400
- `Result.Success()` → 200/201

## Before You Submit
- [ ] `dotnet build` passes with no warnings
- [ ] `dotnet test` passes
- [ ] No new `// TODO` comments left behind
- [ ] No secrets, connection strings, or API keys in code
```
As you can see, these are standard markdown files, easily read by both human developers and AI coding agents. The details cover a wide range of instructions - the same things you might explain to a new developer joining the team.
Conclusion
AI coding agents are only as good as the instructions and context they are given. Instruction files are a force multiplier: as a first step toward improving your agents, a small up-front investment brings compounding returns. The best AI coding agents are the ones that know your standards and practices as well as any senior team member.
So take the time to audit your current AI coding agent usage. Are you using them effectively? Are you providing those proper instructions to give them the context they need to be a true pair-programmer partner? Or are you starting from scratch every time?
Next time we’ll take a look at skills files and how we can help AI coding agents specialize in particular areas.

