Today I gave a practice talk for an upcoming conference presentation (spoiler alert: it didn’t go very well). I gave this talk to a room full of my peers. The topic of my presentation is something I have explored on this blog before: the possibility of artificial epistemic agents. An epistemic agent just is an agent that pursues knowledge creation and generation as its goal. Humans are paradigmatically epistemic agents. The fact that I am writing this now closes the case.
My talk raised the question of whether AI (and I mean mostly LLMs) could be understood as an epistemic agent, and I argued that AI is in fact best understood as such.1 This was not well received, which I expected. I told a room of philosophers that that annoying thing their students are using to write mediocre essays is a strong candidate for being an epistemic agent.
And I get it. All the AI talk these days is annoying: it’s everywhere, everyone is an AI expert, everyone must give their two cents on the matter. This, however, is important. People should talk about AI, because AI is inherently disruptive (and transformative). AI, and the people who design, develop, deploy, and consume it, are causing harm to people and to the planet, intentionally and unintentionally. And for what? To re-phrase an email? To fix bugs in a line of code? To replace a human person with a machine?
So I get it. Why should anyone feel inclined to call an AI an epistemic agent, let alone think the act of doing so is productive?
My argument was that “epistemic agent” is a strategic concept. It provides us with a checklist of the dos and don’ts of AI design, development, and deployment. This move has already been played by Ramón Alvarado. Alvarado (2023) calls AI an epistemic technology. An epistemic technology is one that is:
Designed, developed and deployed for use in epistemic contexts
Designed, developed and deployed to manipulate epistemic content and
Designed, developed and deployed to carry out epistemic operations on such content.
In my opinion, ChatGPT does not meet these criteria. ChatGPT is deployed in epistemic contexts (e.g., essay writing). ChatGPT is deployed to manipulate epistemic content (e.g., data cleaning). And ChatGPT carries out epistemic operations on such content (e.g., tokenization…probably counts). But was ChatGPT intentionally designed to do these things? I am doubtful. ChatGPT was designed to approximate a human, to draw users in, to be a worker bee, not an epistemic agent.
These are not epistemic aims. Deception and maximizing user interaction time are not conducive to productive knowledge building. An AI that you can become romantically attached to is not an epistemic technology. It’s flawed. It’s bad. We need to course correct, and we need big tech to want to.
This is because that combination of stuff (LLMs, GPTs, GANs, RAGs, and so on), these innovative artefacts of human minds, has the potential to yield epistemic technologies and, with the right kind of autonomy and functions, epistemic agents. This year’s Nobel Prizes in chemistry and physics are examples of what is possible when you epistemically align a technology.
Unfortunately, epistemically aligned technology is not where we are going. Geoffrey Hinton, who shared one of those prizes, has since been on a mission to warn about the dangers of AI: dangers like bad actors, and like losing control of AI, the question of what happens when (or if) non-epistemically aligned AI takes over, which Hinton thinks is a real possibility. This is why regulation matters. Regulation seems impossible in today’s climate, and the problem is worsened by for-profit companies lobbying for less of it. I don’t have the answers to these questions.
However, amongst all this bad there is a glimmer of good. We have machines that can do a lot for humanity—for the world—but only if we use them in the right way. I think the right way to use them is to achieve epistemic ends, for instance, new knowledge in medicine and climate science. We should want to build systems that do the smart things we cannot do alone.
Not to mention, I think that’s what the whole damn project was from the get-go. Was that not what Newell and Simon were trying to do when they made the Logic Theorist? When Leibniz wanted to build his calculating machine? The original project has gotten lost in the weeds of a “magical” talking machine.
So, if it is still possible, how do we course correct? Maybe by way of a nifty concept: “epistemic agent.” That’s my two cents, one for epistemic and the other for agent. Perhaps I’ve been reading too much Dan Dennett lately, but I think we can get pretty far with good concepts, especially when the concepts themselves can evolve. Epistemic technology is one such concept, but it applies to much more than just AI. So, if we need one for just AI, maybe epistemic agent will do the trick (because “AI” certainly is not). Maybe it won’t, but I can’t know that if I don’t ask the question, right?
This was the message I was trying to communicate today (along with some more technical what-nots). I don’t think I did the best job at that, but I also was not met with open minds. Instead, I was met with philosophers who (rightly) had their guards up: AI is not an agent, it is not thinking, it is not intelligent. It is a bad tool; we need to get rid of it. We should never have had it in the first place. I am sure this is what ran through their heads.
There is a meteor about to crash into the earth. But denying reality is not going to stop it. Maybe making concepts won’t help either, except that I think it can. The way we understand the world structures the actions we take in it. The way we conceptualize a disease or a pandemic informs research programs and interventions. That is exactly the kind of work we need to be doing. That is the work I am trying to do.
I sure hope the crowd at the conference is more receptive!
This is not a hill I am willing to die on, and I am not sure I will keep the concept epistemic agent—but the project is still worthwhile.