TL;DR
- ChatGPT has been making unusually frequent mention of ghosts for some time now.
- It appears these mentions stem from a training quirk in ChatGPT's retired "nerd" personality.
- A specific directive in GPT-5.5 now instructs the model to avoid out-of-place ghost references.
Earlier this week, a post to the ChatGPT subreddit flagged an eyebrow-raising directive in the system prompt for the new GPT-5.5 model: an explicit ban on mentioning goblins, gremlins, and trolls, among other things, unless they are completely relevant to the question. OpenAI has now addressed its recent models' attraction to these creatures, and it turns out it's mostly down to the chatbot's former nerd-style personality mode.
ChatGPT lets users choose from several pre-selected style and tone combinations, which OpenAI calls personalities. The options let the bot take on different personas – professional, skilled, quirky – in its responses, including, at one time, one OpenAI wanted to be "unexpectedly stupid." In a blog post, the company says that although the nerd setting applied to only one in 40 ChatGPT responses while it was available, it really did like talking about mythical creatures: two-thirds of all uses of the word "goblin" came from nerd-style conversations. OpenAI retired that personality last month.
ChatGPT's mentions of the word "ghost" apparently increased roughly 40-fold between GPT-5.2 and GPT-5.4. OpenAI says that in creating the nerd persona, its engineers "unintentionally gave particularly high rewards to metaphors with creatures," which led to that personality style referencing not only ghosts, but also demons, trolls, and gremlins more often than you might expect.
But because OpenAI had already started training GPT-5.5 before it could figure out why ChatGPT was talking about ghosts so much, the behavior carried over into testing, including overuse of other tic words, among them "pigeon" and another animal term. As a result, the latest model shipped with specific instructions to avoid these words unless absolutely necessary.
It's a little unsettling to learn that widely deployed AI models can develop behavioral quirks that puzzle even the engineers working on them, but at least this particular one was relatively harmless.
If you're a ChatGPT user, have you noticed it talking about ghosts more than expected? If so, has that stopped? Tell us about your experience in the comments.
