It's an amazingly detailed prompt! I think developing some version of it as a standalone interface running on an API might be an idea (though I doubt Claude's API supports artifacts). I feel like this is one of the more sophisticated examples of someone with content and context knowledge of their subject using an LLM like Claude to help build a working prototype of a tool that does something good. You may want to submit it to Google Labs as an experiment, or seek actual funding from one of the main developers to publish it.
This is super promising and could save so much work, well done!
Wondering if and how it could be adapted to handle common science myths repeated by authoritative-sounding sources? For example, newspapers often quote dermatologists saying incorrect information about sunscreen formulations because the actual experts in this area are formulation scientists, but journalists have been taught for years that dermatologists are the ultimate experts in all things skin-related. Not sure how well it could assess relevant expertise and weight it accordingly?
That's a great example - I probably couldn't work it into this one at that level of detail because I'm approaching length limits, but it will do some of that. To build it out to do even better, there's a section on evidence in it that I'd want to expand. Search the prompt for "backing" if you want to play around with it; if not, I'll experiment. Also try my earlier Well-Grounded project: https://chatgpt.com/g/g-679149b4f3348191935d05daa242e669-well-grounded
This is wonderful! I'm going to feature it in a presentation. Have you tried it with Gemini? I just got this result with Gemini 2.5 Pro (through a free account): https://g.co/gemini/share/48e7e4f3a580
Whoa - that's a really good result. I hadn't had much luck with the previous version of Gemini, but 2.5 seems worth a look, especially with the free-for-students announcement. The earlier versions of Gemini were making up some fake links (maybe 5% of the time?), which was getting frustrating, but this is much better!
Yes, the first result with the default Gemini was pretty bad... I haven't explored the new Gemini enough, but this is motivating me.
I wanted to give workshop participants a way to use your prompt if they didn't have paid Claude.
I actually put the instructions into a custom Gem in Gemini. It was tricky because the system chooses the model type automatically; users cannot select it. This meant the Gem was defaulting to Deep Research. Oddly enough, all it took was simplifying the title. Not a bad result, but there could be room for more nuance: https://g.co/gemini/share/09de681e0275
Oh wow, that's great you were able to make a custom Gem. Are those shareable? I tried to make a custom GPT but the prompt was too long. PlayLab doesn't browse, so that wouldn't work.
Not yet. Gems cannot be shared, but it's part of Google's roadmap for Gemini.
Ah, good to know, thanks.
Excellent prompt! Thank you! I’m glad you included Toulmin there; it’s powerful.