8 Comments
Ryan Schultz

This is GOLD; thank you!

Anna Mills

This is amazing--thank you for sharing and making it so easy to copy these prompt moves...

Mike Caulfield

You absolutely could -- the issue is that it's hard for me to test efficacy in the same way with open-ended queries. I think they would be just as effective used that way, but my method of testing is to a) fully explore a difficult-to-explore claim (or a simple one in a muddled information environment), b) determine what evidence a good investigation should surface -- must-haves and nice-to-haves -- and what reasonable positions on the claim would be, and c) run prompts and compare against a reference response.

Claims really stress-test a process, and I'm used to developing stress tests/assessments of investigation methods around claims (just what I happen to have years of experience in). So maybe I should just make that clear -- my reputation in terms of testing is based on this claim-type use, but I *think* that with modification they could be generally useful?

Anna Mills

Thank you so much for explaining; that really helps me understand your approach and the difficulty of testing this kind of approach with open-ended queries. Google should take note... it seems like if there were an approach that worked with open-ended queries, they could build it in as an option to assist with verification instead of just the usual disclaimer about AI inaccuracy.

Mike Caulfield

One of my ultimate goals here is to get evidence-based suggested follow-ups into products. As I think you are pointing out, there's such power in applying a "structure of the discourse" lens to any question. When you focus on "how is the discourse structured on each side of this (evidence, concerns, sources, topical focus, social values warrants)," you can go so much deeper into things.

The thing we both know from our work is that we think we are exploring an issue, but really we are exploring the *discourse* around the issue, and if you can surface that element, so much more is possible.

Anna Mills

There might be some hope of getting this approach into products if the companies feel it makes them look better... and gives users better results. I would think Google would want at least an optional button after the AI overview for something like "help me check this."

Anna Mills

Then the part about the discourse would be built into the output... the user wouldn't have to understand that framework ahead of time... thanks for the dialogue!

Anna Mills

One thing that strikes me is that it would be amazing to apply this approach to situations where people are not fact-checking a particular claim but are instead asking the AI for a claim of fact and then need to verify it. Am I missing something about how this is meant to be applied?

I find myself using AI research assistance to get answers more often than I use it to investigate a particular claim...