Have you ever wondered how AI thinks and works? The way we talk and think about AI may quietly convince us that it is more human than it actually is.
People often call artificial intelligence (AI) “smart” or say that it “knows” something. This may seem harmless, but it can quietly mislead people about what AI actually does.
A new study shows that news writers covering AI are more careful than expected and rarely use strongly human-like language. When they do, the wording falls on a spectrum, sometimes describing basic functions and sometimes hinting at distinctly human traits.
Think “knowing,” “understanding,” or “remembering.” These are everyday words that people use to describe what goes on in the human brain. But when those same words are applied to AI, they can inadvertently make machines seem more human than they really are.
“We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines because it helps us connect to them,” said Jo Mackiewicz, an English professor at Iowa State.
“But at the same time, when we apply mental functions to machines, there is also a risk of blurring the line between what humans and AI can do.”
A research team studied how writers describe AI using human-like language. This practice, known as anthropomorphism, assigns human characteristics to non-human systems.
“Some anthropomorphic phrases may also stick in readers’ minds and potentially shape public perception of AI in unhelpful ways,” said Aune, a co-author of the study.
How news writers actually use AI language:
To understand how often this type of language appears, the researchers analyzed the News on the Web (NOW) corpus.
This massive dataset contains more than 20 billion words of English-language news articles published in 20 countries.
They focused on how often mental verbs such as “learns,” “understands,” and “knows” appear alongside terms like “AI” and “ChatGPT.”
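To get a concrete sense of what a collocation count like this involves, here is a minimal Python sketch of the general idea. It is not the authors’ method or the NOW corpus query pipeline; the verb list, AI terms, window size, and sample sentence are illustrative assumptions only.

```python
import re
from collections import Counter

# Illustrative assumptions only: a small set of mental verbs and AI terms,
# plus a simple co-occurrence window. The published study's actual queries
# against the NOW corpus are not reproduced here.
MENTAL_VERBS = {"knows", "understands", "remembers", "learns", "thinks"}
AI_TERMS = {"ai", "chatgpt"}
WINDOW = 4  # count a verb only if it falls within 4 tokens of an AI term

def collocation_counts(text: str) -> Counter:
    """Count mental verbs that appear near an AI term in a text sample."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok in AI_TERMS:
            # examine a small window of tokens around the AI term
            for neighbor in tokens[max(0, i - WINDOW): i + WINDOW + 1]:
                if neighbor in MENTAL_VERBS:
                    counts[neighbor] += 1
    return counts

sample = "ChatGPT knows the answer, and the AI remembers what you asked."
print(collocation_counts(sample))  # Counter({'knows': 1, 'remembers': 1})
```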
The findings were unexpected: the study found that news writers usually do not pair AI-related terms with mental actions.
“Anthropomorphism has been shown to be common in everyday speech, but we found that it is used much less frequently in news writing,” Mackiewicz said.
The findings highlight the importance of context. Simply counting words is not enough to understand how language shapes meaning.
Mackiewicz said, “For writers, this nuance matters: The language we choose shapes how readers understand AI systems, their capabilities, and the humans responsible for them.”
The research team also emphasized that these insights could help professionals think more carefully about how they describe AI in their work.
“Our findings can help technical and professional communication practitioners reflect on how they think about AI technologies and write about AI as tools in their writing process,” the research team wrote in the published study.
As AI continues to develop, the way people talk about it will remain important. Mackiewicz and Aune stated that writers need to be mindful of how word choice affects perception.
The research materials were provided by Iowa State University. The study, “Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT,” was published in the journal Technical Communication Quarterly.
