If you only check one highlight this week:
B2B CMO? How to avoid falling behind, followed by 10 lessons for building agent teams (ok, that’s two things)
Everyone else: Should you use ChatGPT or Claude?
-François
PS: Welcome Marie! My sister is now a subscriber, so I’m testing if she’s actually reading ;-)

How Marketing Leaders can avoid falling behind and build their personal brand
Ethan Mollick, Wharton professor and one of the most-followed voices on AI, published an essay worth reading in full this week: The Shape of the Thing about where AI capabilities stand and what it means for team leaders.
The summary (with Claude’s help):
AI has moved past the "prompt and respond" phase. The new model: give an AI agent hours of work, get results back in minutes.
Capability benchmarks are accelerating fast. AI now scores 94% on tests where graduate students working outside their field, with Google access, score 34%.
A three-person team at StrongDM built a Software Factory where AI agents write, test, and ship production software. Two rules: no human writes code, no human reviews code.
Expect more pressure on leaders to streamline their teams: a fictional "2028 AI doom" analyst report moved markets, and Block's stock price then rose after it announced 40% layoffs, citing AI.
Every major AI lab now has recursive self-improvement on its roadmap. AI building better AI. The curve may get even steeper.
There are too few role models so far. Every organization figuring out how to use AI right now is setting a precedent for everyone else.
My take on what this means for marketing leaders:
1- Our Marketing teams will likely get smaller. Let’s build the right structure & processes before someone else makes us.
The StrongDM example showcases a three-person team shipping production software using AI agents in a loop, without writing any code or doing code reviews (!!).
Marketing has the same dynamic coming in some areas, faster than we may expect.
Example? One person overseeing AI agents that draft, iterate, test, and report. Campaign briefs turned into copy, ads, landing pages, and performance summaries, with a senior marketer reviewing the output and adjusting the agents, context, and instructions. Work volume increases while headcount stays flat or shrinks.
The CMOs who wait to see how this plays out will face headcount decisions made for them by their CFO or CEO.
Those who start now will already know where an agent's output quality is good enough vs where humans still add unique value (judgment, relationships, positioning, strategy, creativity).
Identify which roles and workflows on your team create high volume output vs. use judgment, leadership & creativity
Run 2 to 3 new agent-first workflows this quarter, discuss what breaks/what works with your team, and iterate
Start building your vision and capabilities for a smaller, nimbler team
2- Let's keep adjusting our point of view on AI capabilities
Most of us formed our first view of AI in 2023. But AI capabilities aren't improving at a steady rate anymore; they are accelerating. And the size of the leap isn't always obvious until you read the latest reports or experience it for yourself in advanced workflows.
A CMO who tried AI copywriting in 2024 and found the output weak may have written off a tool that performs at near-human level on complex tasks today.
We should not make budget decisions, hiring decisions, and strategy calls on a capability snapshot that no longer exists.
Let’s keep updating our mental models as that curve moves. Let’s test AI on tasks we dismissed six months ago.
Keep reading about the latest model and tool capabilities. There are many great resources. I personally enjoy reading Jess Leao's weekly recap and listening to the AI Daily Brief.
Test AI on two to three tasks you dismissed in the past year.
Ask your team to do the same.
3- Let’s set precedent, learn, have fun, and share with others!
Every organization figuring it out right now is setting the reference point for everyone else.
The CMO who can describe exactly how their team runs agent-assisted campaigns, where the humans stay in the loop, where they don't, what broke, and what worked, becomes the person other marketing leaders and CEOs will call.
Document your workflows, those with and those without AI in them, especially the messy, repetitive, manual ones. Share what's working internally, then please also share externally (I’m happy to re-share them here). Your team's AI standard operating procedures and techniques will become a publishable asset that builds your company's talent brand and your own. Not enough CMOs are treating it that way yet (Kip B and Kieran F from HubSpot are really good at this).
Document your AI workflows before you think they're ready to share
Think of your operating model as talent brand content, not just process
The CMOs who share great case studies will define what modern marketing looks like
Let's go do it! 👊🔥
Should you use Claude or ChatGPT?
BOTH are great. Both have many more capabilities than most people realize or use. When I wrote earlier that I was switching to Claude, keep in mind that I already had most of my work context there, and I have a strong affinity for that brand. Claude is where I curate context for each of my clients, so what I had in ChatGPT was not essential.
I will be focusing more on Claude going forward because that's what I use most, but that doesn't mean you should switch. If most of your context lives in ChatGPT or Gemini, what matters is deeply understanding one tool and curating your context and its memory there as much as possible.
The models themselves are very similar. Their interfaces are getting more powerful, especially for setting up and managing agents (e.g. OpenAI’s Codex 3).
Remember that Claude does not generate videos, images, or voice (but Claude Code can orchestrate tools that do, if you give it API access).
➡️ Want to switch from ChatGPT to Claude?
Anthropic just tried to make the switch easy, provided you don't have too many GPTs or project folders in ChatGPT. Here's the step-by-step. It’s pretty basic, but it’s a start.
My selection of tips, news and workflows
💡When to use an agent or build a workflow vs. do-it-yourself?
I like these thoughts from Emma (LinkedIn post screenshot), Head of Communications and Content at Augment Code. She is right that we have some specific skill sets that we are better off not delegating to AI or agents. It's important to think about which ones are yours.

Sachin Rekhi also gives good pointers:

🤖 Using Claude Code - a masterclass by a Product leader turned founder
Sachin Rekhi, CEO at Notejoy and instructor at Reforge, gave a 60-min master class showing how he uses Claude Code for Product work.
Many of these workflows are highly relevant to CMOs and product marketers, and likely to content marketers as well.
He uses Claude Code in the terminal, but you can use it in the desktop app, which is easier. He uses VSCode for file management and WisprFlow for dictation.
🤖🤖🤖🛠️ Ten lessons for building agent teams
Designing and managing agent teams will be a critical skill set.
In a few weeks, you likely won't need OpenClaw or anything complicated to do this; our favorite AI tools will make it progressively easier (see the Perplexity Computer or Replit Agent 4 announcements). But it's worth thinking already about the following, covered in this podcast (Apple, Spotify) by NLW, who shares lessons from the OpenClaw hype, including task separation, security, and memory.
Summary of key lessons (with Claude’s help):
Everyone will need to be an AI builder. At Linear, designers and PMs work directly in the code base using agents. 80 to 100% of work goes through a chat interface. Speed compounds when everyone builds.
Build an AI fluency performance assessment system. Ramp uses four levels: L0 (disengaged), L1 (competent user), L2 (non-technical builder), L3 (technical builder). L0 will be grounds for dismissal. Goal: 25% L1, 50% L2, 25% L3 by end of year. Yes, sounds brutal, but it's the new reality.
Give full context to agents as if they were employees. Add agents to projects, assign them issues, mention them in comments. They need full context. Context gaps cause costly agent mistakes.
One agent per task. One massive prompt doing six jobs degrades quality fast. Context fills up. Separate agents for research, writing, reviewing, and publishing each perform better.
Agents get their own world. Dedicated machine, dedicated email, scoped API keys. Agents see only what you forward to them. Never give them access to personal accounts from day one.
Coordination is the file system. No middleware. Agent A writes to a markdown file. Agent B reads it. That is the handoff. Files do not crash. Files do not need authentication.
Program memory explicitly. Agents start every session fresh. You must build a system for logging and recalling important context. Memory is not automatic. It is a design decision. Without it, agents repeat past mistakes.
Use skills. Skills are plain markdown files that teach an agent how to do something. Write your own or browse 86,000+ on skills.sh (⚠️ careful with security: make sure you absolutely trust the author). Brand guidelines, process rules, and communication standards all qualify. Skills make agents follow your standards, or standards you admire.
Match model to task cost. Do not run Claude Opus 4.6 (expensive) on a recurring job of simple tasks. Use cheap models for monitoring and scheduling. Save expensive ones for writing, research, and judgment calls.
Break the frame. When agents loop on the same ideas, force a reset. Challenge the agent. Tell it to try the opposite approach. Bring in the humans.
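To make the "one agent per task", "coordination is the file system", and "program memory explicitly" lessons concrete, here is a minimal Python sketch. Everything in it is hypothetical: the folder and file names are mine, and the agents are simulated with plain functions rather than real model calls, since the handoff pattern is the point, not the model API.

```python
from pathlib import Path
from datetime import datetime, timezone

WORKSPACE = Path("agent_workspace")  # hypothetical shared folder for all handoffs
WORKSPACE.mkdir(exist_ok=True)

def research_agent(topic: str) -> None:
    """One agent, one task: research only, results written to a markdown file."""
    notes = f"# Research notes: {topic}\n\n- finding 1\n- finding 2\n"
    (WORKSPACE / "research.md").write_text(notes)

def writing_agent() -> str:
    """A separate agent: it reads the handoff file, never the researcher directly.
    The markdown file IS the coordination layer: no middleware, no auth."""
    notes = (WORKSPACE / "research.md").read_text()
    draft = f"Draft based on:\n{notes}"
    (WORKSPACE / "draft.md").write_text(draft)
    return draft

def log_memory(event: str) -> None:
    """Explicit memory: agents start every session fresh, so important context
    is appended to a log file they can be told to re-read next time."""
    stamp = datetime.now(timezone.utc).isoformat()
    with (WORKSPACE / "memory.md").open("a", encoding="utf-8") as f:
        f.write(f"- {stamp}: {event}\n")

research_agent("Q3 campaign messaging")
draft = writing_agent()
log_memory("Drafted Q3 campaign messaging from research.md")
```

In a real setup, each function would be a separate agent session with its own scoped credentials, but the shape stays the same: narrow tasks, files as handoffs, and memory written down on purpose.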
As a complement, here are Anthropic's five principles for safe and trustworthy agents (August 2025):
Keep humans in control while enabling autonomy: read-only permissions by default, human approval before system-modifying actions, graduated trust
Transparency in agent behavior: real-time task checklists, users can inspect and adjust plans mid-execution
Align agents with human values: research on agentic misalignment, extreme scenario testing
Protect privacy across extended interactions: MCP access controls, enterprise admin connectors, data segregation
Secure agent interactions: constitutional classifiers for prompt injection, threat intelligence monitoring, multi-layer security
🛠️ Marketing inspiration: Surge AI’s website - bringing humanity to AI
While many AI infrastructure companies' websites are technical and dark-themed, Surge AI is a welcome counterpoint, bringing humanity, their differentiator, to the fore:
a homepage in the simple form of a manifesto
a model benchmark/leaderboard, the Hemingway Bench, that is all about the art of writing, prominently displayed in their nav
a celebration of what humans do best, highlighting the talent they connect with their AI-lab customers to train those customers' models.


📺 NotebookLM generates impressive videos now
NotebookLM is one of the best tools for learning new concepts. It could already create interactive podcasts, infographics, slides, and written reports.
Google added impressive video capabilities. Check it out!
Final Words
These are AI systems you can simply hand work to, sometimes hours of human work, and get back reasonable, useful results in minutes. This is an era of managing AIs, rather than working with them.
Thanks for sharing these highlights with busy marketing execs around you.🙏
Someone forwarded you this email? You can subscribe here.
François | LinkedIn
I'm a CMO, advisor, and "CMO Wingman". Yes, that's a thing :-). Ask my clients: in this AI era, CMOs need a strategic proactive advisor more than ever. I’m former CMO at Twilio, Augment Code, Apollo GraphQL, Decibel, Udacity and Head of Marketing for LinkedIn Talent Solutions.

