Writing Better Prompts for AI Agents

After building AI agents for the past year, I've learned that prompt quality is the single biggest lever for reliability. A well-crafted prompt can turn a hallucinating chatbot into a dependable tool.

The System Message Is Everything

Most people focus on the user message. That's backwards. The system message sets the entire behavioral boundary. Think of it as the agent's job description: unclear expectations produce unclear results.

Three rules I follow:

  1. Define the role explicitly. "You are a senior Python developer reviewing pull requests" beats "You are a helpful assistant."
  2. State constraints upfront. "Only suggest changes that affect correctness or security. Ignore style preferences."
  3. Specify the output format. "Return a JSON object with keys: summary, issues, and recommendation."
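The three rules above can be sketched as a small helper that composes a system message from an explicit role, upfront constraints, and a required output format. The function name and example strings are illustrative, not part of any real API:

```python
def build_system_message(role: str, constraints: str, output_format: str) -> str:
    """Compose a system message from an explicit role, upfront
    constraints, and a required output format."""
    return f"{role} {constraints} {output_format}"

system = build_system_message(
    role="You are a senior Python developer reviewing pull requests.",
    constraints=("Only suggest changes that affect correctness or security. "
                 "Ignore style preferences."),
    output_format=("Return a JSON object with keys: summary, issues, "
                   "and recommendation."),
)
print(system)
```

Keeping the three parts as separate arguments makes it harder to accidentally ship a system message that is missing one of them.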

Chain-of-Thought Actually Works

Telling the model to "think step by step" isn't cargo-culting; it measurably improves accuracy on reasoning tasks. But the magic is in how you structure it:

Analyze this code for bugs. Follow these steps:
1. Trace the data flow for each input
2. Check edge cases (null, empty, boundary values)
3. List potential issues with severity (critical, warning, info)

Explicit steps beat vague instructions every time.
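One way to keep those steps explicit is to generate the numbered list from data rather than hand-writing it each time. A minimal sketch, with a hypothetical helper name and the step list taken from the example above:

```python
def chain_of_thought_prompt(task: str, steps: list[str]) -> str:
    """Render a task plus an explicit, numbered list of reasoning steps."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return f"{task} Follow these steps:\n{numbered}"

prompt = chain_of_thought_prompt(
    "Analyze this code for bugs.",
    [
        "Trace the data flow for each input",
        "Check edge cases (null, empty, boundary values)",
        "List potential issues with severity (critical, warning, info)",
    ],
)
print(prompt)
```

Storing the steps as a list also makes it cheap to A/B test different step orderings against the same task.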

What Doesn't Work

  • Long preambles about "being helpful and honest": the model already defaults to this
  • Asking for "best practices" without context: what's best depends on the situation
  • Roleplaying elaborate scenarios: the model doesn't need a backstory

The Pattern I Use Most

After many iterations, here's the template I default to:

System: You are [specific role]. [Key constraint]. [Output format].

User: [Context: what I'm working on, why it matters]
[Task: specific, actionable request]
[Constraints: what NOT to do]

Simple. Specific. Testable. That's the whole game.