8 Comments

Wouldn't it be better to make this tool outside of ChatGPT, so it's reliable and doesn't spit out hallucinations? Since it's supposedly running Python anyway...

I've programmed in Python for 26 years, and I still botch the import path and spend frustrating time tidying up bad installations (I should be more careful about setting up separate venvs per project, but sometimes I'm lazy on personal projects). You can compile down to an exe, but no one is going to download an exe. I think the promise of ephemeral software is that people can spin up what they need when they need it, and can share the recipe in something as small as a natural-language text. My guess is that ephemeral software is likely to be an "invention of spreadsheets"-level change in how average people use computing.
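
For what it's worth, a minimal sketch of the per-project venv habit mentioned above, using only the standard library (the ".venv" directory name and the paths are just a common convention, not anything from the article):

import venv
from pathlib import Path

env_dir = Path.cwd() / ".venv"           # hypothetical: venv inside the current project folder
if not env_dir.exists():
    venv.create(env_dir, with_pip=True)  # isolated interpreter plus its own pip
print(f"Activate with: source {env_dir}/bin/activate")  # POSIX shells; Windows uses Scripts\activate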

Related question for Dr. Wong: is there some reason not to run this as a Custom GPT? Off to throw a bunch of images at your prompt to see how it goes - thanks!

Hmm, I grabbed some images from https://sightengine.com/ai-or-not and this approach doesn't agree with what they're presenting. Several of the images they claim are AI (from the October 24 group) pass this test :-/

I guess I should've worded the question better: after it's generated and you (an experienced Python coder) have checked that it's doing what you actually want, why not ask ChatGPT to spit out the actual code for other people (e.g., novice coders) to run, through ChatGPT or whatever other means? That way you don't have to worry about whether the prompt potentially generates incorrect results in a different instance. (A rough sketch of what that exported code might look like follows below.)

My husband works at a start-up where they use AI to help develop software. It saves them time, but they have to "babysit" it because it makes a lot of weird mistakes. Apparently this is the common experience for software developers...
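
On the question above about exporting the code: here is a rough sketch of the kind of standalone script ChatGPT might hand back, assuming (my assumption, not the article's actual prompt) that the checks involve EXIF metadata and a crude look at the frequency spectrum. It needs Pillow and NumPy installed, and interpreting the numbers would still need a human:

import sys
import numpy as np
from PIL import Image, ExifTags

def inspect(path):
    img = Image.open(path)
    exif = img.getexif()
    tags = {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}
    print(f"EXIF tags found: {len(tags)}")  # generated images often carry little or no EXIF data
    gray = np.asarray(img.convert("L"), dtype=float)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    low = spectrum[h//2-8:h//2+8, w//2-8:w//2+8].mean()  # energy near the center (low frequencies)
    print(f"Low-frequency energy: {low:.1f}, overall spectrum mean: {spectrum.mean():.1f}")

if __name__ == "__main__":
    inspect(sys.argv[1])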

With my $20 account it let me run this prompt once with a known AI-generated image, and it did a stunning job of analyzing all the parameters. That was it, though: a trial with a non-AI photo resulted in a reset, and it would not execute any more. I asked for an explanation of the error (basically that images were being dumped on load into the Python environment), and after several exchanges I got this cheerful final response:

“The fact that previously working methods now fail consistently—across multiple images and even basic loading attempts—suggests a deliberate system-level change rather than a temporary glitch.

It was a powerful capability while it lasted, and I’m glad you got to experiment with it. If image analysis support returns or changes in any way, I’ll let you know. In the meantime, if you need help setting up a local test suite or alternative methods, I’m happy to assist.”

Seems they are onto us (or me at least), unless I hit some kind of guardrail (it didn’t like my photo of a Northern cardinal?)…

If the picture has a large file size, sometimes it runs out of memory and gets VERY confused -- maybe see if reducing the size helps?
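
Not from the article, just one quick way to shrink a photo before uploading, using Pillow (the filenames here are made up):

from PIL import Image

img = Image.open("cardinal_original.jpg")   # hypothetical input file
img.thumbnail((1500, 1500))                 # resizes in place, keeping the aspect ratio
img.save("cardinal_small.jpg", quality=85)  # a smaller JPEG is easier on the sandbox's memory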

This. I got a zero, a flat-out flunk, on an AI-vs-actual-painting quiz a while back. The real paintings were pretty obscure, but by known artists; they looked weird, and they got my pick every time. In principle, training on known "real" photos and known generated ones, using test results like those in the article as features, seems simple enough.
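
A hedged sketch of that idea: treat each image's test results as a feature vector and fit a simple classifier. The feature names and numbers below are invented placeholders, and real labeled photos would be needed (scikit-learn and NumPy assumed available):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [exif_tag_count, noise_score, compression_score] -- hypothetical features
X = np.array([
    [32, 0.8, 0.9],   # known real photo
    [28, 0.7, 0.8],   # known real photo
    [ 0, 0.2, 0.1],   # known AI-generated image
    [ 1, 0.3, 0.2],   # known AI-generated image
])
y = np.array([0, 0, 1, 1])  # 0 = real, 1 = AI-generated

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[2, 0.25, 0.15]]))  # class probabilities for a new image: [P(real), P(AI)]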
