I first started writing about anthropomorphism and AI in September of 2022, prior to the release of ChatGPT and prior to my even knowing about “GPTs.” My first paper on the topic can be found here.
At the time I was an undergraduate student worried about how subtle anthropomorphism could be—that it needn’t be an embodied or spoken feature of a non-human but could be conveyed simply through mental imagery formed through the use and interpretation of intentional language.
My work on the topic concerned decision-making algorithms and centred on the premise that statements as simple as “an algorithm decides” launder (moral) agency to an algorithm in a way that obstructs a human’s capacity to make an informed decision. Anthropomorphism obstructs epistemic agency. My novel contribution was to show that “agency laundering” can be entirely unintentional, a possibility the authors I was commenting on had questioned, and to explain how this is so.1
I first encountered the term anthropomorphism reading material on evolutionary psychology. I read something vague about how humans have the tendency to anthropomorphize just about anything, even clouds or dots on a page. Shortly after reading this I read a paper on the opacity of decision-making algorithms and remarked to my undergraduate thesis supervisor that had the authors not anthropomorphized the AI in their writing, they might have been able to say something actually useful about what qualifies an explanation of algorithmic decision-making as “transparent enough” for the individuals implicated by algorithmic decisions.
By the time I had a full draft of the paper linked above, which began as that small insight about the unfortunate use of figurative language, ChatGPT had been released.
Although I did not know what this technology was, I had recently taken a philosophy course titled “Minds, Brains, and Machines” and knew enough to recognize two things. First, that the language capabilities of ChatGPT were a breakthrough moment in AI. Second, that such capabilities were going to be a big problem. I had just spent the last few months arguing that simple statements in the everyday intentional language of folk psychology were a problem in the context of automated AI decisions, and now there was some new ‘algorithm thing’ outputting the same kind of statements in first-person language!
Today my research concerns not moral agency but epistemic agency, which nonetheless implicates moral agency in important ways. My claim is that the epistemic agency of users of anthropomorphized AI systems is at stake, that the problem is anthropomorphism, and that the solution is to recapture, or explicitly iterate on, “human-centric” design and the growing movement of human-centric AI (HCAI) as a human-computer interaction (HCI) problem implicating the design of user interfaces.
Thus, the solution is not to make sense of AI systems by claiming they are like us in ways that matter for moral and psychological theorizing (a problematic move), but to leverage the non-human aspects of AI to foster “synergistic” HCI.
“Synergistic interaction” is a term that has captured my attention recently. I’ve seen it in the literature of domains such as physics, neuroscience, HCI, and epistemology. It sometimes slides by under different names, but the idea is nonetheless present in the “collective consciousness” of AI researchers. Let’s give it a permanent place here.
Synergistic interaction is an interaction between two or more cooperating agents that results in a synergistic contribution. Broadly, a synergistic contribution is always greater than the individual contributions either agent would have made had they not cooperated. I borrow this definition from Wollstadt and Kruger (2022), who reframe HCI as tailored toward cooperation instead of automation. Provisionally, this supports the claim that the point of including AI in scientific research is not to automate parts of the research pipeline but to facilitate or scaffold cooperation between agents (human and artificial) such that AI can be characterized as cooperating with humans on research-related tasks, not replacing them per se. Of course, notions of synergistic interaction like this one are quite commonplace in the HCI literature; versions of the concept go back decades (see: Tsvetkova et al., 2018; Jennings et al., 2014; Salomon et al., 1991). All the above motivates a research program: design machines and interfaces that enable synergistic interactions.
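To make the definition concrete, here is a minimal formal sketch in my own notation (not Wollstadt and Kruger’s): write $c(H)$ and $c(M)$ for the contributions a human agent and a machine agent would make acting alone, and $c(H, M)$ for their joint contribution when cooperating. The interaction is synergistic when

$$c(H, M) > \max\big(c(H),\, c(M)\big),$$

or, on the stronger reading under which cooperation adds something neither agent could supply on their own, when $c(H, M) > c(H) + c(M)$.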
OpenAI, Anthropic, and the like have not caught on to the emerging epoch of synergistic interaction as I and others have. OpenAI, however, has at least acknowledged the dangers of anthropomorphism (only two years late) and has begun studying how lasting the effects of anthropomorphism are on users’ free thinking and access to truthful information: the stuff of epistemology.
But studies on anthropomorphism ought to be more longitudinal than the one OpenAI conducted to back the claims in their system card. To grasp the consequences of various design considerations, like the alluring voices of GPT-4o, there are categories of epistemic agents that should be studied: those who tend to defer to others, those who are more easily persuaded by friends, those who are vulnerable or impressionable like children and teens, those who place great epistemic authority in the “internet,” and those who cannot identify fake news or who neglect or fail to fact-check. Perhaps even those who have never studied critical thinking, which is grossly missing from education curricula across at least Canada and the United States.2
Moreover, OpenAI chose to study the hypothesis that ChatGPT might alter the deeply ingrained beliefs of users in a short period of time. Pretty bold, but not very telling, and it neglects the effects of “nudging,” which could explain how information is slowly integrated into a user’s belief system over longer periods of time.
As users establish a history with a conversational AI (and the AI with them, as it learns from each interaction), the human, whether interacting with the AI as a friendly interlocutor or as a trusted information source, lets their “epistemic guard” down: they implicitly allow information to be integrated into their knowledge repository with an unquestioning attitude3, just as humans do with other humans. This is not just because we are human, however; it is because we interact with the conversational AI as if it were human.
In the terms of Prof. Daniel Dennett’s philosophy, now immediately available to me and perhaps to you, the reader: in doing so we effectively apply the intentional stance to the AI, and design considerations that give us no choice but to do so should be strictly outlawed!4
Anthropomorphism is an intentional design choice that obstructs the epistemic agency of users of AI systems. Even if it is not strictly outlawed, AI developers should not just be careful about the extent to which they anthropomorphize their models for the sake of “user experience”; their doing so should be regulated by governments around the world.
This claim is not new. In 2011, five principles of robotics were developed by a research group from the United Kingdom’s Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC; see: Boden et al., 2017, and Bryson, 2017). Today’s leading conversational AI technologies explicitly violate rule four (Boden et al., 2017, p. 4):
Rule 4 (a): “Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead, their machine nature should be transparent.”
Rule 4 (b): “Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.”
Insofar as rule 4(a) articulates that the machine nature of robots ought to be transparent, I argue that, with respect to emerging AI technologies, their machine nature should be emphasized in their design. That is, if we design systems that augment us by complementing our cognitive capacities (not mimicking them), synergistic interactions will abound in everyday epistemic projects, scaffolding the learning of the public, and will enable scientific breakthroughs with AI. The emphasis should be placed on the cooperative nature of the task, which undercuts the supposed fecundity of fully automated science. Let me explain.
Research in epistemology shows that sociocultural and cognitive diversity improves team performance on projects (Fazelpour & De-Arteaga, 2022). Extending this to epistemic projects, for example projects aimed at knowledge building or justification, diverse groups produce better knowledge and justifications than non-diverse groups do. These are some of the most fundamental claims of value-laden science and feminist standpoint epistemology.
By leveraging the machine nature of AI—a technology that is actively present in our epistemic communities in an agentic way—we can make the most of this technology by making it more unlike us.
Put differently, at present we seem to engage, or at least talk about a tendency to engage, with conversational AI as we would with any person we might choose to talk to or collaborate with on a project. We have taken this collaboration frame so far that there is a possible future in which AI systems replace human scientists. This runs counter to the progress made by research on diversity in science and epistemology. If only AI does research, science loses diversity and epistemic progress is limited. If autonomous AI systems are instead added to increasingly diverse human communities, and the diversity of AI increases as well, progress will be much more fruitful. This is contingent on additional qualities of epistemic agency that would grant an AI membership in an epistemic community as more than just a scientific instrument.5
For this to be a reality we must be critical of anthropomorphic design. For as long as AI is framed as an information technology, a means of communication, or a tool for thinking, and for as long as conversational AI systems impede our free thinking and access to truthful information, anthropomorphic design is the wrong design.
Anthropomorphism is neither human-centric (for a myriad of reasons beyond the scope of this article) nor conducive to the kinds of user experiences large corporations should be motivated to foster. Nor should these experiences be enabled by proprietary models. If we humans are to do this ‘AI thing’ properly, and in a way that is conducive to synergism, such technologies must also be open-access. The benefits of diversity will only truly abound if all can partake.6
I am always keen to connect and establish collaborative partnerships! Please consider interacting; for all you know, it might be synergistic!
Rubel, A., Castro, C., & Pham, A. (2019). Agency Laundering and Information Technologies. Ethical Theory and Moral Practice, 22(4), 1017–1041. https://doi.org/10.1007/s10677-019-10030-w
For more examples and a more detailed overview of OpenAI’s recent system card see: Maynard, 2024.
Nguyen, C. T. (2022). Trust as an Unquestioning Attitude. In T. S. Gendler, J. Hawthorne, & J. Chung (Eds.), Oxford Studies in Epistemology Volume 7. Oxford University Press. https://doi.org/10.1093/oso/9780192868978.003.0007
Dennett, D. C. (2023, May 16). The Problem With Counterfeit People. The Atlantic. https://www.theatlantic.com/technology/archive/2023/05/problem-counterfeit-people/674075/
My research on this topic concerns understanding the epistemic contributions of AI systems, especially those that are perceivably agentic and autonomous. It is important to recognize that some AI systems are more than mere perceptual instruments like microscopes or clocks; they are tools for reasoning, which might grant them a place as a subclass of ‘epistemic agent’ in our epistemic communities. But this comes with the difficulty of grasping to what extent such systems need to be intentional, minded, or even conscious to responsibly participate in this domain, especially insofar as many of the likely kinds of AI are black boxes.
There is more on this topic in the open science literature, of which I am not an expert (see: Spirling, 2023, for an introduction). It is imperative to recognize that non-anthropomorphic design and open access only take us so far and cannot truly happen without properly addressing structural injustices.