Discussion about this post

Ryan Schultz:

This is GOLD; thank you!

Luke Burton:

Great tips! As a crazy heavy AI user, it’s an ingrained part of my workflow to have a second context (often using another model) critique the answers given by the first model. I’d say it’s not just good practice but required practice!

To start: LLMs are great at role-playing, so use that. "What would X say in rebuttal to this?" or "Grade this argument as if you're the philosopher Y." Encourage them to provide citations. GPT-5 Thinking is particularly good at researching with citations.
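A role-played critique in a fresh context might look something like this (a minimal sketch using the OpenAI Python client; the model name, prompts, and placeholder text are my own assumptions, not anything from the post):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "..."  # paste the first model's answer here

# A fresh context role-plays a named critic and is nudged toward citations.
critique = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you grade with
    messages=[
        {"role": "system",
         "content": "You are a skeptical philosopher grading an argument. "
                    "Cite a specific source for every objection you raise."},
        {"role": "user",
         "content": f"Grade this argument, then write the strongest rebuttal:\n\n{draft}"},
    ],
)
print(critique.choices[0].message.content)
```

The point of the fresh context is that the critic has no stake in the first answer: it never saw the conversation that produced it.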

Flipside: LLMs pick up on tone, so if you start with "tell me why this is bullshit" you will get answers skewed toward the negative. Neutral prompts like the ones you've shown here work best: "weigh the evidence," etc.

However, you can get super interesting results by getting that second or third opinion, and you can run as many rounds of this as you want. For example: have two different contexts make the cases for and against ("imagine you're in the Oxford debating club and your argument is …"), then have a third context adjudicate the result.

Or you can go nuts and put in another layer that independently fact-checks each argument before the judge sees it; the whole pipeline is sketched below.
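Here's roughly what that debate-then-adjudicate loop could look like (again a sketch with the OpenAI Python client; the model name, prompts, and example motion are all placeholder assumptions, and each side could just as easily run on a different provider):

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; swap in whichever models you actually use

def respond(system: str, user: str) -> str:
    """One isolated context per call: no history is shared between them."""
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return reply.choices[0].message.content

claim = "Remote work makes engineering teams more productive."  # example motion

# Round 1: two independent contexts argue opposite sides, Oxford-debate style.
case_for = respond("You are in the Oxford debating club. Argue FOR the motion.", claim)
case_against = respond("You are in the Oxford debating club. Argue AGAINST the motion.", claim)

# Optional layer: independently fact-check each argument before judging.
checked_for = respond("Fact-check every claim below; flag anything unsupported.", case_for)
checked_against = respond("Fact-check every claim below; flag anything unsupported.", case_against)

# Final round: a third context, which saw neither side being prompted, adjudicates.
verdict = respond(
    "You are a neutral debate judge. Weigh the evidence on both sides and pick a winner.",
    f"Motion: {claim}\n\nFor:\n{checked_for}\n\nAgainst:\n{checked_against}",
)
print(verdict)
```

Keeping each step in its own context is the whole trick: the judge never sees the system prompts that produced the arguments, only the arguments themselves.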

The vast majority of people aren’t pushing LLMs *anywhere near* hard enough! My partner is writing a book and she routinely pastes whole chapters into different models to synthesize different perspectives. I think this is the rest of the iceberg people sometimes miss with this technology.

