Welcome, Builders
I saw a post on X last week that's been rattling around in my head:
"Soon every skill, tool, and product will exist in two versions — one optimized for humans, one for the agents working on their behalf."
Aaron Levie said it in the context of enterprise AI. I think the shift is bigger than enterprise.
Here's what I mean.
🔥 FUEL — The Two-Audience Shift
When you publish documentation today, who reads it?
A human, sure. But also: the AI assistant they asked to summarize it. The agent pulling context before executing a task. The tool that loads your content into memory every single session.
The human reads once. The agent reads every turn.
Your content, your products, your skills — if they're not built for both readers, you're optimizing for half the audience.
I'm not theorizing. Last week I shipped v3.1.0 of my AI marketing skills library. Every one of the 30 skills now ships in two files:
SKILL.md — full detail, verbose context, examples. For Claude Code, Cursor, humans who want the complete picture.
SKILL-OC.md — condensed, step-only, token-efficient. For OpenClaw agents that load it into live context on every turn.
Same intellectual property. Two formats. Two different readers.
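The two-file layout makes routing trivial: pick the condensed file for token-constrained agents, the full file for everyone else. A minimal sketch (the `load_skill` helper and `for_agent` flag are my illustration, not part of the library):

```python
from pathlib import Path

def load_skill(skill_dir: Path, for_agent: bool) -> str:
    """Return the condensed version for agents that reload context
    every turn, and the full version for humans and long-context tools."""
    name = "SKILL-OC.md" if for_agent else "SKILL.md"
    return (skill_dir / name).read_text()
```

One source of truth per skill, one branch at load time.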
Paul Graham told founders to make something people want. Peter Yang is now asking: what does it look like to make something the agents those people deploy want?
The answer: you write for both. You ship both. You stop pretending your only reader is human.
Try it yourself. Grab any doc, SOP, or guide you've written and run it through this prompt. You'll have a dual-mode version in 5 minutes:
I'm going to give you a piece of source content — a document, guide,
skill, SOP, or knowledge base article.
Your job: produce two versions of the same content.
---
VERSION 1: HUMAN.md
Purpose: A human is reading this to learn, understand, and implement.
Rules:
- Include full context and background ("why this matters")
- Add examples for any step that could be misunderstood
- Use headers, bullet points, and whitespace for scanability
- Explain the reasoning behind decisions, not just the steps
- Include edge cases, common mistakes, and "watch out for" notes
- Write in clear, direct language — no jargon without explanation
- Target length: as long as it needs to be for full understanding
---
VERSION 2: AGENT.md
Purpose: An AI agent loads this into its context window every session
to execute tasks. Every token matters.
Rules:
- Steps and actions only — no background, no "why," no motivation
- No examples unless a step is ambiguous without one
- Use terse, imperative language ("Do X" not "You should consider X")
- Compress lists into single lines where meaning is preserved
- Remove all filler: "Note that," "Keep in mind," "It's important to"
- Remove section intros and transitions — go straight to instructions
- Use consistent formatting: numbered steps for sequences,
bullets for options
- Include exact values, names, and paths — never say "as appropriate"
- Target length: 40-60% shorter than HUMAN.md
---
SOURCE CONTENT:
[Paste your content here]
Copy it. Use it. Start shipping for both audiences.
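The 40-60% target in the AGENT.md rules is easy to verify mechanically before you ship. A rough sketch, using whitespace-split word count as a stand-in for real token counts (`compression_ok` is a hypothetical helper, not part of the prompt):

```python
def compression_ok(human_text: str, agent_text: str) -> bool:
    """Check the agent version is 40-60% shorter than the human version.
    Word count is a crude proxy for tokens, but close enough to catch
    a version that barely compressed or over-compressed."""
    human_words = len(human_text.split())
    agent_words = len(agent_text.split())
    reduction = 1 - agent_words / human_words
    return 0.40 <= reduction <= 0.60
```

If the check fails, rerun the prompt on the agent version with a tighter or looser length instruction.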
🎯 FOCUS — The System Test
I was in New Orleans last week for a conference during the first week of my new job. Three days of sessions, handshakes, and less time at a laptop.
Back home (and in the cloud), my agent work and content pipeline didn't notice.
My agents drafted content. Research queued overnight. Morning brief delivered every day at 6am. None of it waited for me to sit down and type a prompt.
That's not discipline. That's simply architecture.
Most people use AI reactively — when they remember to. A real system runs whether you're at your desk or in a convention center. The difference isn't which tools you use. It's whether you built a system or developed a habit.
Habits require your presence. Systems don't.
Run this audit on your setup — 3 questions:
1. Does it run when you don't tell it to?
If your AI workflow only fires when you open the app and type a prompt, you have a habit. Scheduled agents, crons, and queues turn habits into systems.
2. Does it hold context across sessions?
Most AI conversations start from zero every time. A system carries memory — voice profile, queued tasks, project context — so agents pick up where they left off without a briefing.
3. Could someone read the output without knowing you were gone?
The real test. If the answer is yes, you've built something that works independent of your presence.
If any answer is "no" — you have a tool, not a system.
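Question 1 is the easiest to fix in code. In production you'd reach for cron or a task scheduler, but the core move is the same: compute the next trigger time and fire without a human present. A minimal sketch of the scheduling logic (only the 6am brief is taken from this post; the rest is illustration):

```python
from datetime import datetime, timedelta

def next_run(now: datetime, hour: int = 6) -> datetime:
    """Next daily trigger: today at `hour` if it's still ahead,
    otherwise the same time tomorrow. A cron like `0 6 * * *`
    does the same thing, with the OS doing the waiting."""
    run = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if run <= now:
        run += timedelta(days=1)
    return run
```

A loop that sleeps until `next_run(...)` and then drafts the morning brief is a system. Opening the app at 6am is a habit.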
Want to run the audit for real? Paste this into Claude, ChatGPT, or your agent of choice:
Audit my current AI workflow against 3 criteria. For each one,
score me 1-5, explain why, and give me one specific action
to move the score up by 1.
CRITERION 1 — AUTONOMY
Does any part of my workflow run without me manually starting it?
(Scheduled tasks, crons, triggers, automated pipelines count.
Opening an app and typing a prompt does not.)
CRITERION 2 — MEMORY
Does context persist across sessions? Can my AI pick up where
it left off without me re-explaining my brand, voice, goals,
or current projects?
CRITERION 3 — INDEPENDENCE
If I disappeared for 72 hours, would any output still ship?
What breaks first — and what keeps running?
Here's how I currently use AI:
[Describe your setup — tools, frequency, what you use AI for,
any automation you have in place]
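For the memory criterion, persistence can start as a plain file agents read at session start and write at session end. A minimal sketch: the keys mirror the examples above (voice profile, queued tasks, project context), but the file format is my assumption, not OpenClaw's actual mechanism:

```python
import json
from pathlib import Path

def load_memory(path: Path) -> dict:
    """Load persisted context so a new session starts warm.
    Missing file means first run: return empty defaults."""
    if path.exists():
        return json.loads(path.read_text())
    return {"voice_profile": "", "queued_tasks": [], "projects": []}

def save_memory(path: Path, memory: dict) -> None:
    """Write context back at session end so the next session
    picks up where this one left off."""
    path.write_text(json.dumps(memory, indent=2))
```

Even this crude version beats re-explaining your brand and goals in every conversation.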
🛠️ BUILDER'S NOTES
$500 Challenge — Day 15: $415 / $500

Fifteen days in. $85 from the goal.
The interesting part isn't the number; it's how this keeps growing without any real marketing push.
It’s a library that's indexed. Listings that are live. Content that keeps working after you stop touching it.
What shipped this week:
Skills v3.1.0 — all 30 skills backfilled with dual-mode format (SKILL.md + SKILL-OC.md)
The dual-mode standard isn't a formatting decision. It's the two-audience principle in practice. Every skill I ship now has to work for the person reading it AND the agent executing it.
I expect everything we build will work this way within 18 months. We're doing it now.
📡 SIGNAL BOOST
Worth your attention this week:
Aaron Levie — "Building for trillions of agents" — where this week's FUEL started. Enterprise framing, but the insight scales down to solo builders.
Peter Yang (Product Compass) — "Make something agents want" — riffing on Paul Graham's "make something people want." The shift from humans-as-only-audience to humans + their agents. Read this one slowly.
Dan Shipper (Every) — Published a beginner's guide to OpenClaw. Good primer if you're curious about the infrastructure behind agent skills.
The shift is already here.
Most people are still building for one audience. The ones who figure out the second one early — who design for the agent reading their work as intentionally as for the human — will have a real edge.
I'm betting you're one of them.
See you next week.
Brian
P.S. — The $500 Challenge is moving along. If you want to see what 30 AI marketing skills look like — built for humans AND agents — check them out on Claw Mart or Gumroad. Every purchase gets you closer to watching me hit this thing live.
And as always, FREE skills can be found over on GitHub for both Claude Code and OpenClaw.
