Surely I’m not just an LLM

:: LLM, AI

By: John Clements

I just read an Ars Technica article by Benji Edwards titled “The personhood trap: How AI fakes human personality”, with the subtitle “AI assistants don’t have fixed personalities—just patterns of output guided by humans.”

https://arstechnica.com/information-technology/2025/08/the-personhood-trap-how-ai-fakes-human-personality/

I have lots of things to say, but I’m just going to post this very brief reply:

In my opinion, Benji Edwards falls in this article into the classic “Cartesian illusion” common to many people who talk about human minds: the belief that behind the mechanism that generates the words, there is some kind of “knower”, essentially a little person inside the head. The sentence that most clearly communicates this illusion starts like this: “You’re interacting with a system that generates plausible-sounding text based on patterns in training data, not a person with …” The problem here is this: a human is just a system that generates plausible-sounding text based on patterns in training data! That’s all you are. You’re just like an AI.

“But … but … but … I have real experiences, based on the real world!” you say. When you dig down, I claim that there’s really no qualitative difference here. It’s true that your inputs are filtered in a particular way, one that makes you likely to have a more fixed personality than most current AIs. But the idea that there’s something essentially different about you because you have … what … a soul? … is a challenging one to support at any fine-grained level. I claim that I’m essentially following the reasoning of Daniel Dennett (RIP) here, who is no longer with us to be appalled at my misinterpretation of his positions.