Happy Valentine’s Day! ❤️
This week's edition is different and essay-heavy, because so much has changed recently, spurring a lot of debate and changes to how we work with AI.
I think it's worth taking the time to fundamentally understand and reflect on that.
The next edition will be more practical and illustrate some of these concepts (e.g. how I keep “one more prompt”-ing Claude Code so it runs in the background during my next meeting or break).
If you're looking for concrete tips and techniques, scroll immediately to:
“How to use AI today”
“Synthetic Personas for AEO”
-François

The cognitive overload of managing agent squads.
From “Prompt-and-Tweak” to managing agent teams
Agent-first engineers no longer write code. They supervise agent squads.
I was talking to a cybersecurity founder last week. He told me he was exhausted. Not from writing code, but from supervising and redirecting the agents that code on his behalf. From cognitive overload.
His workflow has completely changed.
He spawns multiple AI agent squads at once. Each squad has a supervisor agent that plans and coordinates the work. Sub-agents handle specific tasks. When they're done, they ping him for review. He checks in, gives direction, and sends them back to work. Then another squad pings. Then another.
This evolution in software engineering happened in stages. First, AI helped with code autocomplete. Then it suggested a next edit. Then you could set a single agent on a specific task and review it when done. Now “agent-first” developers run full agent orchestration. Squads of agents with a planner at the top and specialized workers underneath.
This shift happened fast. It’s coming next to other knowledge work functions.
Prompt-and-Tweak is out. Agent orchestration is here.
Most knowledge workers today are still in prompt-and-tweak mode: write a detailed prompt with instructions, then tweak the output. That works for one-off tasks, but it doesn't scale.
Agent orchestration is different. The shift has happened really fast because the models went from being very good at writing, to good at doing, to now good at planning and re-planning. The latest models can supervise work, assess quality, and learn on their own by observing which outputs succeed.
Here's what this could look like for a marketing campaign. The orchestrator agent starts with pre-research: audience analysis, competitive landscape, market signals. It helps you build the strategic brief (yes, I wrote strategic). Once you approve the direction, it delegates to specialized sub-agents. One handles copy. Another builds creative concepts. Another sets up email sequences. Another packages event materials. All working from the same brief, same context, same goals.
And just like software can be tested automatically, online campaigns can be A/B tested, with feedback loops that go straight back to these agents. They learn. They adjust. They improve.
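The squad pattern above can be sketched in a few lines. This is a toy illustration of the structure, not a real framework's API: the supervisor fans one shared brief out to specialized sub-agents, collects their drafts, and pauses for human review. All function and field names here are hypothetical.

```python
# Toy sketch of the supervisor/sub-agent pattern described above.
# Each sub-agent works from the same brief; the supervisor collects
# the drafts and pings the human for review. All names illustrative.

def copy_agent(brief):
    return f"Copy draft for: {brief}"

def creative_agent(brief):
    return f"Creative concepts for: {brief}"

def email_agent(brief):
    return f"Email sequence for: {brief}"

def supervisor(brief, sub_agents):
    """Delegate the same brief to every sub-agent, then ping for review."""
    drafts = {name: agent(brief) for name, agent in sub_agents.items()}
    return {"status": "ready_for_review", "drafts": drafts}

result = supervisor(
    "Spring launch campaign for product X",
    {"copy": copy_agent, "creative": creative_agent, "email": email_agent},
)
print(result["status"])  # ready_for_review
```

In a real setup, the A/B-test results mentioned above would feed back into each sub-agent's context before the next run; here the loop is left out for brevity.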
Early adopters working agent-first are now operating at a completely different speed and scale. Those still in prompt-and-tweak mode, who aren't building up context, skills, and systems as they go, are falling behind.
Manage agents like directors: set vision and goals, define guardrails.
The old way was telling ChatGPT* or Claude specifically what we needed it to do and how.
Working with the main supervisor agent is like being a VP working with a director on their team. We're not dictating how to do the job. We're discussing strategy, reviewing plans, and adjusting together like a thought partner.
The real skill is managing these agent teams: building “the team”, leading, guiding, and jumping in only when needed. And when the output is good, knowing when to tell the agent to save it to memory and/or build and save a skill, informed by the process, the output, and the learnings. That's how we build up their capabilities over time. That's exactly what I told Claude to do when it built me a website I like: “Create a brand style guide: web page and markdown file. Save this to memory.”
“You or Me” in the loop, at the right time.
We also need to know when to intervene with the agents. Defining the right checkpoints to make sure strategy, direction, and execution don't drift.
This is what executives do every day:
Set vision and direction
Define values and guiding principles
Establish guardrails and constraints
Define operating priorities and key metrics
Let directors figure out tactics and execution
Judgment, taste, and experience - our unique killer skills?
We need to know when to trust the system, when to push back, and when to dig in.
We like to think judgment and taste are uniquely ours: the things that obviously separate us from these agents.
But we're now seeing AI show good taste as well, at least in software development. Today I noticed a notification email that Claude had created for one of my apps. The layout was good, the copy was on point. I had never seen it. I had even forgotten that it had created it.
So the killer skill may actually be the combo of knowledge, judgment, taste, experience, AND leadership. Aren't these the exact traits of a good executive? So, is nothing changing? Or do we all have to become senior leaders, fast?
Experienced leaders have an edge (if they learn to manage these squads).
This is good news for experienced leaders. Executive thinking and leading are exactly what provide guidance and constraints at a high enough level of direction for "directors" to operate. Judgment and experience matter more than ever. That includes asking the right questions and anticipating issues before they compound.
Steve Yegge put it well: "AI is turning us all into Jeff Bezos, by automating the easy work, and leaving us with all the difficult decisions, summaries, and problem-solving."
How will juniors acquire the right experience?
If the execution layer is automated, how will juniors gain the experience that builds judgment, taste, and business acumen? That kind of experience only comes from doing the work, seeing what fails, and learning why.
That's the question none of us has fully answered yet. Should junior developers still write code to understand the process? Should entry-level marketers still write copy and emails after studying great writing?
I guess we'll figure that out. But we really ought to think about it collectively.
The essays that I've featured below will give you more food for thought, I hope.
*now that the Claude mobile app has a voice mode that is as good as ChatGPT’s, I'm cancelling my ChatGPT premium sub. Claude and Gemini are my LLMs.
My selection of tips, news and workflows
🔍 How to use AI today
The title of this podcast episode - "How to Learn AI with AI" - doesn't do it justice. It's packed with great tips on using agents and the latest AI models.
Here are the key tips:
Share your vision, goals, and perception of what exists. Give AI the full picture of what you're solving and why before you ask it to build anything. It saves hours of back-and-forth.
Think out loud, even with half-formed ideas. AI can handle messy thinking. Say "I'm not sure about this but..." and let it help you shape the idea instead of waiting until you have it figured out.
Let AI draft wide, then you react and filter. Ask for 50 or 100 options at once. It takes 30 seconds. You'll spot patterns and gaps faster than brainstorming one idea at a time.
Push back on AI outputs. Ask AI to push back on yours. AI says everything confidently. That doesn't make it right. Tell it to critique your plan from first principles. The best work comes from real back-and-forth.
Zoom out periodically to reground in the big picture. It's easy to get lost in the weeds. Every 20-30 minutes, pause and ask: "Does this still connect to what we're actually trying to build?"
Before ending a session, ask AI to write a handoff document. Capture what was decided, what was rejected and why, what's still open, and what comes next. Without this, your next session starts from zero.
Store handoff docs in your LLM's project setup for future sessions. Claude Projects, ChatGPT custom instructions, Gemini Gems. Use whatever your tool offers so every new conversation picks up where you left off.
🛠️ Synthetic Personas for AEO
Kevin Indig, in his excellent Growth Memo, explains the benefit of using synthetic personas + how to build them. Copying a lot of his good techniques below. More in his post.
“The shift: Traditional personas are descriptive (who the user is), synthetic personas are predictive (how the user behaves). One documents a segment, the other simulates it.”

And he goes on to explain how to build them.
“Building a synthetic persona has 3 parts:
Feed it with data from multiple sources about your real users: call transcripts, interviews, message logs, organic search data.
Fill out the Persona Card - the 5 fields that capture how someone thinks and searches.
Add metadata to track the persona’s quality and when it needs updating.
The mistake most teams make: trying to build personas from prompts. This is circular logic - you need personas to understand what prompts to track, but you’re using prompts to build personas. Instead, start with user information needs, then let the persona translate those needs into likely prompts.
Data sources to feed synthetic personas:
The goal is to understand what users are trying to accomplish and the language they naturally use:
Support tickets and community forums: Exact language customers use when describing problems. Unfiltered, high-intent signal.
CRM and sales call transcripts: Questions they ask, objections they raise, use cases that close deals. Shows decision-making process.
Customer interviews and surveys: Direct voice-of-customer on information needs and research behavior.
Review sites (G2, Trustpilot, etc.): What they wish they’d known before buying. Gap between expectation and reality.
Search Console query data: Questions they ask Google. Use regex to filter for question-type queries: (?i)^(who|what|why|how|when|where|which|can|does|is|are|should|guide|tutorial|course|learn|examples?|definition|meaning|checklist|framework|template|tips?|ideas?|best|top|lists?|comparison|vs|difference|benefits|advantages|alternatives)\b.* (I like to use the last 28 days, segment by target country)
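In Python, that same pattern can be applied directly to an exported query list. Only the regex itself comes from the post; the `question_queries` helper and the sample queries below are my own illustration.

```python
import re

# Kevin Indig's question-query regex (verbatim from the post), used to
# keep only question-type queries from a Search Console export.
QUESTION_RE = re.compile(
    r"(?i)^(who|what|why|how|when|where|which|can|does|is|are|should|"
    r"guide|tutorial|course|learn|examples?|definition|meaning|checklist|"
    r"framework|template|tips?|ideas?|best|top|lists?|comparison|vs|"
    r"difference|benefits|advantages|alternatives)\b.*"
)

def question_queries(queries):
    """Return only the queries that match the question-type pattern."""
    return [q for q in queries if QUESTION_RE.match(q)]

queries = [
    "how to reduce churn",    # matches: starts with "how"
    "acme pricing",           # no match: navigational query
    "best crm for startups",  # matches: starts with "best"
]
print(question_queries(queries))  # ['how to reduce churn', 'best crm for startups']
```

The `(?i)` inline flag makes the whole pattern case-insensitive, so exports don't need to be lowercased first.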
Persona card structure (5 fields only - more creates maintenance debt):
These 5 fields capture everything needed to simulate how someone would prompt an AI system. They’re minimal by design. You can always add more later, but starting simple keeps personas maintainable.
Job-to-be-done: What’s the real-world task they’re trying to accomplish? Not “learn about X” but “decide whether to buy X” or “fix problem Y.”
Constraints: What are their time pressures, risk tolerance levels, compliance requirements, budget limits, and tooling restrictions? These shape how they search and what proof they need.
Success metric: How do they judge “good enough?” Executives want directional confidence. Engineers want reproducible specifics.
Decision criteria: What proof, structure, and level of detail do they require before they trust information and act on it?
Vocabulary: What are the terms and phrases they naturally use? Not “churn mitigation” but “keeping customers.” Not “UX optimization” but “making the site easier to use.”
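The 5-field card above maps naturally onto a small data structure. A minimal sketch as a Python dataclass, assuming the field names are my own paraphrase of the post (the `metadata` dict stands in for the quality/freshness tracking it recommends):

```python
from dataclasses import dataclass, field

@dataclass
class PersonaCard:
    """The 5-field persona card described above, plus metadata.

    Field names are illustrative paraphrases, not an official schema.
    """
    job_to_be_done: str       # real-world task, e.g. "decide whether to buy X"
    constraints: str          # time, risk, compliance, budget, tooling limits
    success_metric: str       # how they judge "good enough"
    decision_criteria: str    # proof/structure needed before they act
    vocabulary: list          # terms and phrases they naturally use
    metadata: dict = field(default_factory=dict)  # quality + refresh tracking

# Hypothetical example: an engineer persona fed from support tickets.
engineer = PersonaCard(
    job_to_be_done="fix a flaky deployment pipeline",
    constraints="on-call pressure, no budget for new tooling",
    success_metric="reproducible, specific steps",
    decision_criteria="working code samples and exact error messages",
    vocabulary=["CI keeps failing", "rollback", "flaky tests"],
)
```

Keeping the card this small is what makes the post's maintenance argument work: five required fields are easy to refresh when the underlying user data changes.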
🛠️ Recent improvements and vibe working - Kieran’s newsletter
Good overview in Kieran's latest newsletter of the recent progress.
Key quote:
Anthropic’s Scott White put it well: “We are now transitioning almost into vibe working.” Not vibe coding. Vibe working. You describe the outcome. The AI does the work.
💡 Weekend reading: Three thought-provoking essays about AI, acceleration, exhaustion and job displacement
First, good insights in this super viral article by Matt Shumer about the recent acceleration and its implications.
The key point, which I have experienced myself in the last two weeks:
“The experience that tech workers have had over the past year, of watching AI go from "helpful tool" to "does my job better than I do", is the experience everyone else is about to have.”
“I know this is real because it happened to me first
On February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest (F’s note: I have the same exact impression, and most of the founders and software leaders I've talked to share it too)
I am no longer needed for the actual technical work of my job.
I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done.
A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.
I tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say: "It's ready for you to test." And when I test it, it's usually perfect.
I'm not exaggerating. That is what my Monday looked like this week.
It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have.
The AI labs made a deliberate choice. They focused on making AI great at writing code first... because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself.
"But I tried AI and it wasn't that good." I hear this constantly. I understand it, because it used to be true.
Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone.
I'm not writing this to make you feel helpless. I'm writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt.”
Second, this rebuttal argues that job displacement will take place more slowly than the first essay predicts.
“I honestly don’t think that we’re going to see mass unemployment, or the sudden death of human cognitive labor, or anything that feels like an “avalanche.” The years to come will be weird, especially if you’re keeping abreast of the latest developments in AI. But the actual impacts of AI in the real world will be a lot slower and more uneven than people like Shumer seem to think.
The most important thing to know about labor substitution, the place where any serious analysis has to start, is this: labor substitution is about comparative advantage, not absolute advantage. The question isn’t whether AI can do specific tasks that humans do. It’s whether the aggregate output of humans working with AI is inferior to what AI can produce alone: in other words, whether there is any way that the addition of a human to the production process can increase or improve the output of that process. That’s a very different question. AI can have an absolute advantage in every single task, but it would still make economic sense to combine AI with humans if the aggregate output is greater: that is to say, if humans have a comparative advantage in any step of the production process.
I think that AI models are very good and will get much better. No. The fault is not with the models, but with us. The world is run by humans, and because it’s run by humans—entities that are smelly, oily, irritable, stubborn, competitive, easily frightened, and above all else inefficient—it is a world of bottlenecks. And as long as we have human bottlenecks, we’ll need humans to deal with them: we will have, in other words, complementarity.” - David Oks
My take: AI and agents can spread fast and do a lot autonomously when they can test and validate their output on their own and/or have access to systems to do that (human farms to validate AI work anyone? Well that's pretty much what ScaleAI and Mercor are...). That's true in software: you can automate testing. That's true in online campaigns: you can run A/B tests. It's not true in many other fields, where a human must be the judge and validator of the output… And is liable for the outcome and its impact.
Third, this good essay - The AI Vampire - about AI-induced exhaustion.
“The world is accelerating, against its will. I can feel it; I grew up in the 1980s, when time really did move more slowly, in the sense that news and events were spaced way out, and society had time to reflect on them. Now it changes so fast we can’t even keep up, let alone reflect.
I’ve been watching the effect the AI Vampire is having on people around me and I’m growing concerned. We’re all excited, but it’s also… weird.(…)
The developing situation is a multi-whammy coming at developers from all sides:
Crazy addicted early adopters like me are controlling the narrative.
You can’t stop reading about it in the news; there’s nowhere to hide from it.
Panicking CEOs are leaning in hard to AI, often whiplashing it into their orgs.
Companies are capitalistic extraction machines and literally don’t know how to ease up.
So you’re damned if you do (you’ll be drained) and you’re damned if you don’t (you’ll be left behind).
(…)That’s a race that ends, in my opinion, with everyone collapsing in exhaustion without actually winning the race.”
Final Words
But if you haven’t used specifically Opus 4.5/4.6 with specifically Claude Code for at least an hour, then you’re in for a real shock. Because all your complaining about AI not being useful for real-world tasks is obsolete. AI coding hit an event horizon on November 24th, 2025. It’s the real deal. And unfortunately, all your other tools and models are pretty terrible in comparison.
Thanks for sharing these highlights with busy marketing execs around you.🙏
Someone forwarded you this email? You can subscribe here.
François | LinkedIn
I'm a CMO, advisor, and "CMO Wingman". Yes, that's a thing :-). Ask my clients: in this AI era, CMOs need a strategic proactive advisor more than ever. I’m former CMO at Twilio, Augment Code, Apollo GraphQL, Decibel, Udacity and Head of Marketing for LinkedIn Talent Solutions.
