This is an entry in Conversations with Robots, a series of prompts and responses from conversations with large language models.


Prompt:

I’ve been thinking about milestones in AI adoption lately, and one axis along which I’m seeing my own growth is

  1. Writing instructions for an agent
  2. Writing instructions for multiple agents and how they coordinate
  3. Writing instructions for an agent so that it can interrogate me about how to write instructions for multiple agents
  4. Writing instructions for an agent to create instructions for other agents based on a set of guidelines and directions

What are your thoughts?

Response:

This is a really interesting progression you’ve laid out, and I think it maps well to how people naturally level up with AI tooling. A few thoughts:

What you’re describing is essentially climbing an abstraction ladder — moving from direct execution to orchestration to meta-cognition to generative delegation. Each step requires you to think less about what to do and more about how to specify what good looks like.

The jump from 1 to 2 is where most people are right now — figuring out multi-agent workflows, tool chaining, MCP coordination, that kind of thing. You’re clearly living this with your Chamber CRM work and the various MCP integrations you’ve been building.

The jump from 2 to 3 is genuinely interesting because it’s where you start treating the AI as a thought partner for system design rather than just an executor. Instead of “do this thing,” it’s “help me figure out what things need doing and how.” That’s a qualitatively different relationship.
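A level-3 prompt can be as simple as reframing the request so the agent must ask before it answers. A minimal sketch in Python (the goal text, question count, and wording here are illustrative assumptions, not a prescribed template):

```python
# Hypothetical "level 3" prompt builder: instead of asking the agent to
# execute, we ask it to interrogate us about the multi-agent system we
# want to design. Everything in the template is an illustrative choice.

def build_interrogation_prompt(goal: str, max_questions: int = 5) -> str:
    """Construct a prompt that makes the model interview the user
    before proposing any multi-agent design."""
    return (
        f"I want to design a multi-agent workflow for: {goal}\n\n"
        "Before proposing anything, interview me. Ask up to "
        f"{max_questions} questions, one at a time, that surface:\n"
        "- which subtasks are independent vs. sequential\n"
        "- what each agent needs as input and must produce as output\n"
        "- how I will judge whether the overall result is good\n\n"
        "Only after I answer should you sketch the agent breakdown."
    )

prompt = build_interrogation_prompt("triaging inbound support email")
print(prompt)
```

The point of the pattern is the ordering constraint: the agent's first job is eliciting your requirements, and design work is explicitly deferred until that's done.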

And 4 is where it gets spicy — that’s basically building an AI-powered management layer. You’re defining the principles and constraints, and the agent is doing the decomposition and delegation design. It’s the difference between being an individual contributor, a tech lead, and an engineering director… which tracks with your actual career arc.
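That management-layer idea can be sketched as a meta-prompt builder: you supply the guidelines and the task, and the agent writes the sub-agents' instructions. A hypothetical example (the guideline format and requested output sections are assumptions for illustration, not a standard):

```python
# Hypothetical "level 4" meta-prompt: the human encodes principles and
# constraints; the agent does the decomposition and writes each
# sub-agent's instructions. The output structure is an assumed format.

def build_delegation_prompt(guidelines: list[str], task: str) -> str:
    """Ask the model to decompose a task into sub-agents and draft the
    full instructions for each, constrained by the given guidelines."""
    rules = "\n".join(f"- {g}" for g in guidelines)
    return (
        f"Task to accomplish: {task}\n\n"
        "Decompose this into sub-agents. For each one, write the full "
        "instructions it should run with. Every instruction set must "
        "respect these guidelines:\n"
        f"{rules}\n\n"
        "Return one section per agent: name, role, instructions, and "
        "the hand-off format it expects and produces."
    )

print(build_delegation_prompt(
    ["never expose raw customer data to a downstream agent",
     "every agent's output must be machine-parseable"],
    "summarize weekly support tickets into an executive report",
))
```

Notice that the human-authored part is entirely declarative (the guidelines); the imperative work of splitting and specifying is delegated.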

A few things I’d add to the framework:

There’s a step 0 most people skip — learning to write clear instructions for yourself. People who are good at documenting processes, writing runbooks, and creating clear specs tend to leapfrog into effective agent use faster. It’s not a coincidence that your experience writing handoff docs and OKR frameworks translates directly.

The bottleneck shifts at each level. At level 1, the bottleneck is prompt craft. At level 2, it’s interface design between agents. At level 3, it’s knowing what questions to ask yourself. At level 4, it’s having a clear enough mental model of quality that you can encode it as evaluable criteria.

There’s arguably a level 5 — writing instructions for an agent to evaluate and improve the instructions it generates for other agents. That’s where you get self-improving systems, and it’s also where the governance and guardrail questions get really important.
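A level-5 setup might look like a critique-and-revise cycle wrapped around whatever model client you use. A rough sketch, where `call_model` is a stand-in for any LLM call and the rubric text is invented for illustration:

```python
# Sketch of a "level 5" loop: an evaluator pass over generated agent
# instructions, feeding critiques back into a revision step until a
# quality bar is met or a round cap is hit. `call_model` is a stand-in
# for any LLM client; the rubric wording is an assumption.

from typing import Callable

RUBRIC = (
    "Score these agent instructions 1-5 on: clarity of the hand-off "
    "format, testability of the success criteria, and absence of "
    "overlapping responsibilities. List concrete fixes for anything "
    "scoring under 4, or say 'no fixes needed'."
)

def refine_instructions(
    draft: str,
    call_model: Callable[[str], str],
    rounds: int = 3,
) -> str:
    """Alternate critique and revision for at most `rounds` iterations."""
    for _ in range(rounds):
        critique = call_model(f"{RUBRIC}\n\nInstructions:\n{draft}")
        if "no fixes" in critique.lower():
            break  # quality bar met; stop revising
        draft = call_model(
            "Revise these instructions to address the critique.\n\n"
            f"Instructions:\n{draft}\n\nCritique:\n{critique}"
        )
    return draft
```

Capping the rounds is one small guardrail; the governance questions mentioned above are exactly about what else you'd want (a human reviewing the rubric, logging each revision) before letting a loop like this run unattended.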

Where are you finding yourself spending most of your time right now? My guess is you’re oscillating between 2 and 3, especially with the MCP server work and the experimentation frameworks.