Don't settle for AI slop. Here's how to avoid it

Build AI content critics | Get distinctive AI output | Master Claude Skills

Hello friends!

Some tactical tips to improve our own output and our favorite AI's output.

If you only check one highlight this week: creating your AI graders.

Thanks for reading and forwarding!

-François

Creating Our Content Graders

Are you constantly wondering whether what you read was generated with AI (you may be wondering just that right now*)?

Yep. Me too...

That means our bar for authentic quality content has gone up. Or so I hope.

So, what if, instead of letting AI create our content, we leaned on experts and our own taste and judgment to improve it?

Alas, not everyone can afford a college coach to critique their essays or a cybersecurity expert to review their technical content.

The good news is that we can make AI act as the expert we need, giving us feedback.

The trick lies in creating the right expert for your exercise, your audience, and your style. And in never blindly trusting the recommendations.

I've recently created “AI graders”, in the form of custom GPTs (you can also use Claude Projects or Gemini Gems), to help others - including my daughter - review and improve the content they create in seconds.

The GPTs don't create the content for them. They are critics who grade content quality for their audience, against a detailed rubric, and give them tips and recommendations for improvement, including rewrite options and rationales.

The human is responsible for accepting or rejecting each recommendation and reworking the content.

What I share below is intermediate-level. You can get more advanced with loops, or by tailoring prompts to a specific model, or by using Claude Skills, for instance (see the last section).

Here's how it works:

Note that “AI” below represents your favorite AI. Sticking to a single platform will save you time and keep all of your context in one tool.

1- Create a rubric with deep research: feed your AI a source of best practices.

a) For a college essay grader: I ran deep research with Perplexity to identify the best practices of great college essays, then asked it to create a detailed rubric based on them, organized into a maximum of 6 categories.

b) For a cybersecurity content grader: I fed ChatGPT links to blog posts and assets that a cybersecurity leader rated highly. I then told it to create a detailed rubric based on what these posts had in common (you can test it below).

2- Review and edit the rubric, either by prompting or by editing it directly in a canvas (just tell ChatGPT to “open a canvas”, or ask Claude for an artifact). This review step is critical. This is what differentiates lazy AI users from those who actually use their judgment and taste to avoid AI slop.

3- Create a GPT (or Gem, or Claude Project) and add, in the knowledge section:

  1. that rubric (the context)

  2. your own writing style guide (use a process similar to 1b above)

  3. details about the target audience. Ideally, a full persona profile with their roles, challenges, goals, needs, constraints, etc.

  4. detailed instructions, i.e. “the prompt”, e.g.: “Based on the rubric, style guide, and audience attached in this GPT’s knowledge:
    a) Grade the asset I will provide as input. Use a simple 1-5 scale (1 = Poor, 5 = Excellent). Be a tough grader. We need to create elite assets.
    b) Provide a thorough critique of the asset, with strengths and weaknesses, from most to least critical. Keep the target audience in mind.
    c) Recommend specific improvements and their rationales. That can be a different outline, rewrites of specific sections, or other changes needed to score at least a 4 on the rubric. Keep my writing style in mind throughout.”

4- Test the GPT and iterate on the rubric and prompts first, until the feedback gets much closer to what a real expert would say, before iterating on your content itself.

5- Now the real work continues: edit the content based on what you deem most judicious.

Where to create a GPT in ChatGPT.

Want to try one? Here’s an example of an asset grader targeting Chief Information Security Officers (CISOs). You can input and grade this content from Push Security.

*That's me dictating with an AI tool. But they're my words. :-)

My selection of tips, news and workflows

💡 How to Get Distinctive and Opinionated AI Output

Nathaniel Whitmore shares five good prompting tricks to make our AI less average (Apple podcast - YouTube).

Combine that with what I shared above, and join the anti-slop crusade!

Summary:

Trick 1: Create Your Negative Style Guide

  1. Identify overused words, phrases, and formatting you want to avoid (em dashes, maybe?)

  2. Create your list of banned words (e.g., leverage, synergy, revolutionary, etc.)

  3. Apply this negative style guide to every prompt
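
For instance, the negative style guide can be a short block you paste at the end of every prompt. This wording is only an illustration; build the list from the patterns you actually want to ban:

  Never use these words: leverage, synergy, revolutionary, game-changing, delve.
  Avoid em dashes and formulaic three-part lists.
  Never open with “In today's fast-paced world” or any variation of it.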

Trick 2: Generate strong POVs, not “It depends” answers

  1. Force AI to pick ONE choice and argue vociferously for it

  2. Ask AI to "steel man" each option with the strongest arguments

  3. Then have AI commit to and defend a single choice

Trick 3: Cliché Burn Down

  1. Ask AI to list the 10 most common clichés in the content

  2. Have AI rewrite the output, avoiding identified patterns

Trick 4: Self-Critique Process

  1. Draft a first version of the content (essay, deck, presentation)

  2. Have AI red team it, then list the top 5 ways it's generic

  3. Have AI rewrite a V2 that fixes each issue

  4. Have AI explain why it changed what it changed

  5. Advanced: Switch models (GPT-5 thinking to o3) for cross-critique

Trick 5: Use Examples and Your Reasoning

  1. Provide an example of better-than-average output

  2. Explain WHY the example is better than conventional wisdom

  3. Articulate the principles that make it work

🛠️ More Advanced: Using the new Claude Skills

Want to do more advanced AI work (without coding)?

Claire Vo reviews what Claude Skills are and how to use them in her Claude Skills explained podcast.

AI-generated Summary*:  

Claude Skills provide a structured framework for creating reusable AI workflows that you can call on demand.

They solve the problem of repeatedly copying and pasting the same complex prompts by allowing you to define, save, and discover task-specific instructions.

What you can do with them

  • Define task-specific instructions, examples, and scripts for Claude to execute on your behalf

  • Automate repetitive tasks like analyzing data in a specific way, creating documents, or running a script

  • Bundle additional content and context, such as templates or examples, into a skill using relative file references

  • Build custom workflows, such as turning technical changelogs into a user-facing newsletter or drafting follow-up emails from demo notes

How to use them

  • A skill is a folder that must contain a central skill.md file for your main prompt

  • The skill.md file must start with the skill's name and description, followed by your detailed instructions

  • You can add other files to the folder, like templates, and reference them from your main skill.md file

  • To use your skills, zip the folder and upload it to the claude.ai web app

  • You can invoke a skill with natural language; Claude infers which skill to use based on your prompt and the skills it has available.

*generated using Gemini in Chrome directly from the YouTube page. Super timesaver.
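
To make that structure concrete, here's a minimal sketch of what a skill folder could look like for the changelog-to-newsletter example above. The folder name, file names, and exact header syntax are illustrative only; check Anthropic's documentation for the precise format before zipping and uploading.

  changelog-newsletter/            <- the folder you zip and upload
    skill.md                       <- required: name and description first, then instructions
    newsletter-template.md         <- optional supporting file, referenced from skill.md

  Inside skill.md, something like:

    name: Changelog to Newsletter
    description: Turns a technical changelog into a short, user-facing newsletter section.

    Instructions:
    1. Read the changelog I provide in the chat.
    2. Rewrite it for non-technical readers: lead with benefits, drop internal jargon.
    3. Follow the structure in newsletter-template.md (a relative reference inside this folder).
    4. Keep the tone friendly and concise.

Once it's uploaded, you can simply ask Claude to “turn this changelog into a newsletter section” and it should pick the right skill on its own, as Claire describes.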

Final Words

❝

Because AI has been trained across the entire corpus of everything that humans have output, almost by definition it is optimized around average conventional wisdom.

NLW - on why we need to up our AI game

Thanks for sharing these highlights with busy marketing execs around you. 🙏

Someone forwarded you this email? You can subscribe here.

François | LinkedIn 

I'm a CMO, advisor, and "CMO Wingman". Yes, that's a thing :-). Ask my clients: in this AI era, CMOs need a strategic, proactive advisor more than ever. I'm a former CMO at Twilio, Augment Code, Apollo GraphQL, Decibel, and Udacity, and former Head of Marketing for LinkedIn Talent Solutions.