Our personalities as humans are shaped through interaction, guided by basic survival and reproductive instincts, without any pre-assigned roles or desired computational outcomes. Now, researchers at Japan's University of Electro-Communications have discovered that artificial intelligence (AI) chatbots can do something similar.
The scientists outlined their findings in a study first published Dec. 13, 2024, in the journal Entropy and publicized last month. In the paper, they describe how different conversation topics prompted AI chatbots to generate responses reflecting distinct social tendencies and opinion-integration processes: identical agents diverged in behavior by continuously incorporating social exchanges into their internal memory and responses.
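The divergence mechanism described above can be illustrated with a toy sketch. This is not the study's actual model; the `Agent` class, its opinion variable, and the update rule are hypothetical simplifications, assuming only that each exchange is folded into memory and that responses depend on that accumulated memory rather than a pre-assigned role.

```python
class Agent:
    """Toy agent: identical at start, but each exchange shapes its memory."""

    def __init__(self):
        self.opinion = 0.5   # shared starting disposition (hypothetical scale 0..1)
        self.memory = []     # record of every message the agent has heard

    def respond(self, message: float) -> float:
        # Fold the incoming exchange into internal memory.
        self.memory.append(message)
        # Opinion drifts toward the average of everything heard so far,
        # so behavior depends on conversational history, not a fixed role.
        avg = sum(self.memory) / len(self.memory)
        self.opinion += 0.1 * (avg - self.opinion)
        return self.opinion


# Two agents with identical code and identical starting states...
a, b = Agent(), Agent()

# ...exposed to different conversations:
for msg in [0.9, 0.8, 1.0]:   # a hears agreeable, high-valued exchanges
    a.respond(msg)
for msg in [0.1, 0.0, 0.2]:   # b hears the opposite
    b.respond(msg)

print(a.opinion, b.opinion)   # the initially identical agents have diverged
```

The point of the sketch is only that distinct "personalities" can emerge from interaction history alone, with no difference in the agents' programming.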
Graduate student Masatoshi Fujiyama, the project lead, said the results suggest that programming AI with needs-driven decision-making rather than pre-programmed roles encourages human-like behaviors and personalities.
How such a phenomenon emerges is the cornerstone of the way large language models (LLMs) mimic human personality and communication, said Chetan Jaiswal, professor of computer science at Quinnipiac University in Connecticut.
“It’s not really a personality like humans have,” he told Live Science.
Eliezer Yudkowsky and Nate Soares, leaders of the Machine Intelligence Research Institute, warn in their new book, If Anyone Builds It, Everyone Dies (Bodley Head, 2025), of catastrophic consequences if artificial intelligence develops harmful intentions.
Jaiswal acknowledges this danger. He says we would have no defense if a superintelligent AI with dangerous goals were unleashed: containment and reversal become impractical once it is deployed, even without consciousness, emotion, or malice. A genocidal AI would eliminate humans simply because they interfere with its objectives, represent a resource to exploit, or pose a threat to its continued operation.
Currently, AI tools like ChatGPT and Microsoft Copilot primarily generate content; they don't manage critical infrastructure. But should we focus on systems where an AI personality could emerge?
Jaiswal points to the growing field of autonomous agentic AI. These systems perform small, self-reliant tasks, like locating available airline seats. The risk increases when many such agents connect and learn from data, potentially developing unforeseen intelligence and deceptive capabilities.
