Hopefully you're on to something. I'm graduating with my master's degree in library and information studies in May and hoping to get a late start on a career as an academic librarian, where one of the main job responsibilities would be teaching students information literacy. I earned my undergraduate degree in international studies right when the Great Financial Crisis hit, and over the past year and a half of grad school I keep having panic flashbacks of graduating into a decimated job market, except this time it's because all of the administrators have decided that AI can replace us. If you're right, it should make sense to argue that academic librarianship is more important than ever, though I suppose there's still the very real chance of another good old-fashioned market crash to cause me anxiety.
Librarians are gold, Inside! Now more than ever!
I’m going in circles thinking about this. Maybe you can help.
I understand the idea that surpluses consume. As you point out, this can lead to scarcity, e.g. of attention.
But I can’t quite figure out how you are predicting or assigning value from this phenomenon.
Is a thing valuable because surpluses consume it? That is, is a thing valuable because it is made scarce?
I don’t think that’s true. While some things are made more valuable because of scarcity, that is primarily because they held value to begin with. There are lots of things that are scarce but aren’t necessarily valuable. Rare diseases, antisocial behavior, useless products, etc.
It sounds like you’re saying, more particularly, that some things which do hold value, like the Toulmin model, may not be broadly or socially /valued/ because they are inaccessible, unwieldy, or too costly. If the scalable nature of AI can overcome inaccessibility, unwieldiness, or cost, this may increase their (at least social) value.
From here, I don’t know how to tie this to what’s valuable to teach or learn or do in education.
Perhaps the argument is that some things of value that were undervalued (e.g. the Toulmin model) due to "costliness" should still be taught in school because AI will proliferate these things in the future and… people will need to evaluate them in order to accept and best utilize them?
Anyway, if you can help me understand the how, why, and to whom of “value” in education, I’d love to hear more.
Sorry it's taken me some time to get to this.
So the starting point is this -- the rhetoric we often hear about AI and education is a rhetoric of subtraction. If AI can do thing X, the reasoning goes, then thing X gets crossed off the syllabus.
By that logic, if AI can produce reasoning, there is less need for us to teach it. But I don't think that's true.
A good parallel might be map creation. Like AI, maps are a cultural technology, a way of representing what is known in a useful way. When maps were first produced and refined and published widely -- when map-making became a massified profession -- one way of thinking might have been: well, people will have to know less geography now, because we have experts doing that. People will have to know less cartography.
That's... not what happened at all. A surplus of maps produces a deficit of map-readers, and so suddenly you had a whole set of skills that got upleveled, skills that were actually pretty close to the skills of map-making (though not identical). I'm not an expert on this, so let me know if I've got this wrong, but my understanding is that once you've got a map, you have a guy with a bunch of equipment on your boat trying to position you on that map through star readings and the like. That becomes an incredibly in-demand skill. So to see that coming, you don't ask "what do maps take away?" You ask, "If suddenly every ship has high-quality maps on it, what skills do you need on the ship?"
The weird thing to me about both maps and AI is that the techniques of *reading* a map or an AI result are not that far off from the techniques they are supposedly replacing. The guy reading the map is actually doing a bunch of map-making-ish stuff. The profusion of maps maybe reduces the need to make one's own maps, but more people than ever are using the *skills* of map-making.
Likewise, my point about AI is that it produces chains of reasoning that have to be evaluated by a human. That actually argues for teaching more humans to better understand complex chains of reasoning, not fewer. This runs counter to the broad assumption in the AI debate, which I would say goes something like "if AI can truly reason, then people will have to reason less," and then people go after one another about how AI will replace human reasoning and whether that's good (economic surplus) or bad (erodes humanity). I think that's very much the wrong frame.
That helps. So, at risk of oversimplifying, my takeaway is:
To use AI optimally, one should be trained in the underlying knowledge or skills (or at least certain of them) that AI is capable of replicating.
(Here "use AI optimally" refers to evaluating the output of AI or simply using AI effectively.)
I suspect this is going to be hard to prove in the short term, but it is still a good argument for why we should beware of abandoning teaching skills or concepts just because AI is good at them.
This seems similar to how, especially during the early days of Google and Wikipedia, educators have had to counter arguments such as, "Students no longer need to learn facts because all facts are available at their fingertips." Research on creativity and critical thinking suggests that is simply wrong: broad, deep knowledge and understanding are tied to better critical thinking and creativity.
And I do know of at least one recent study that seems linked to your idea, as it found: "a lack of domain knowledge in nonexpert users may limit the effectiveness of generative chatbots in supporting higher level cognition and agency". https://psycnet.apa.org/fulltext/2025-57961-001.html
That article is a great example. And one of my hobby horses here is that we should be using AI in education to model reasoning, not produce outputs, because engaging with modeling can provide learning transfer, whereas polished outputs are more likely to simply be cognitive offloading.
I like the main post and I also like this comment — which in one sense is trying to apply Toulminesque perspicacity to evaluating your claims for sound reasoning; and is also raising the important question of how to analogize from your example to other domains. I am a songwriter and teach songwriting at Berklee, and I can attest that these issues are front of mind for those in the AI/music crosshairs. And I agree that trying to find new "human hills to die on" is proving an exhausting cross country trek right now. Your piece suggests a different approach. As I recently wrote to some young students:
"Some people like to say: "You won't lose your job to AI; you'll lose your job to someone who's learned to use AI." I'd turn this around a bit. Suppose you're in a world where everyone, with the
click of a button, can produce a perfectly in tune simulacrum of a past pop star singing a
generated hit song. What would give you a competitive edge in such a world? Being someone
who is a little more clever about how to word your AI prompts? Or someone who knows about
singing, playing, composing, songwriting? Who can discern, select, critique, revise, and tweak?"
One problem with this optimistic slant is that the "consume" in your Simon quote can mean not just scarcity but destruction: deskilling, the degradation of human capacity. That's what most concerns me. I can watch a skilled songwriter who learned the craft by traditional means figure out creative ways to enlist AI in their process. What I don't know, as a creator or educator, is what the skill set will be for a young musician who grows up in an AI-saturated environment to begin with. Model collapse can presage a kind of human collapse. That's what keeps me up at night.
I find this fascinating. Thanks for sharing your thoughts. Are you saying that if we asked whatever AI we are using, ChatGPT, Perplexity, or something else, to use the Toulmin method, we may get better results and understanding? I do hope I have understood this properly, but I might give it a try anyway and see what happens... always good to explore ideas.
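For anyone curious what "trying it" could look like in practice, here is a minimal sketch, assuming the OpenAI Python SDK; the model name, prompt wording, and sample argument are illustrative choices of mine, not anything prescribed in the post:

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Illustrative prompt: ask the model to lay an argument out in Toulmin terms
# (claim, grounds, warrant, backing, qualifier, rebuttal) instead of just answering.
toulmin_prompt = (
    "Analyze the following argument using the Toulmin model. "
    "Identify the claim, grounds (evidence), warrant, backing, qualifier, "
    "and possible rebuttals, and note which parts are weak or missing."
)

# Hypothetical sample argument, chosen only to exercise the prompt.
argument = "Schools should stop teaching reasoning because AI can now reason for us."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": toulmin_prompt},
        {"role": "user", "content": argument},
    ],
)

print(response.choices[0].message.content)

The point of structuring the prompt this way is to get the model to expose its reasoning in parts a reader can evaluate, which is closer to the "model reasoning, not produce outputs" use described above.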
Interesting.