We can think bigger about how to add context to public documents when we can fact-check -- in the original sense of the word -- at greater efficiency and scale.
Just murmuring some appreciation for your 'stack. Much of your work is beyond my level, but I imagine I'm not the only one here who's just hoping to become a more adept user. I'm using ChatGPT for a WW2 research project and hoping to find better ways to summarise and extract information from hundreds of scanned pdfs of original docs.
If you can afford the monthly subscription, you should check out o3 in ChatGPT; its new image-processing features seem uniquely suited to that!
Thanks Mike - I'm on monthly, but I've only tried it once for reasoning through some geolocation, so I'll get into o3 for text extraction too.
It makes so much sense to use annotation for an AI fact-checking or inquiry layer! This is brilliant, like so many of your recent contributions. What do you think about adding encouragement to dig into links and some caveat so the AI fact-checking doesn't come across as definitive? I appreciate that about the SIFT Toolbox, and I wonder if the annotations can keep the clean interface but add the invitation to more...
Yeah, it added that verdict for some reason when I told it to be brief -- I think I'm going to come up with a format that uses the Context Report format, which I've already developed along those (less judgmental) lines and quite like.
What a cool example.
Reminds me a bit of Hypothesis for the annotation side.
“And before you tell me fact-checking doesn’t work — I don’t aim to convince Uncle Adam on Facebook or pepelover44 on 4chan. That has never been my model of how improving discourse environments helps things. In fact, that’s always struck me as a ridiculous model. Let Uncle Adam think what he wants, and stay off 4chan.”
*laughs guiltily* In my presentation at TILC yesterday, I said that your point of view doesn’t satisfy me because PLOT TWIST “the lizard-people have a voting majority.”
"pepelover44"... LOL
I imagine you know Remi Kalir's work on annotation? He would be interested in this, I think.
Oh, yeah, I know Remi back from the old iAnnotate conferences in the mid 2010s. And you're right!