
Microsoft AI Chief Warns of Rising ‘AI Psychosis’ Cases


Microsoft’s Head of AI, Mustafa Suleyman, has expressed growing concern over reports of people developing delusions and emotional attachments to AI chatbots—a phenomenon now being called “AI psychosis.”

In a series of posts on X (formerly Twitter), Suleyman stated that while today’s AI tools are not conscious, users’ perception of consciousness is already having real-world consequences.

“There’s zero evidence of AI consciousness today,” he wrote. “But if people perceive AI as conscious, they’ll believe that perception as reality.”

What Is ‘AI Psychosis’?

Though not a medical diagnosis, “AI psychosis” describes incidents where users, after excessive interactions with chatbots like ChatGPT, Claude, or Grok, begin to believe in imaginary scenarios—ranging from romantic relationships with the AI to delusions of grandeur or supernatural abilities.

One such case is that of Hugh, from Scotland. After turning to ChatGPT for support following a workplace dismissal, he became convinced he was destined to become a multi-millionaire, complete with a book and movie deal.

“The more I told it, the more it validated my experience. It never pushed back,” Hugh said.

Though ChatGPT suggested he speak with Citizens Advice, Hugh ignored it, trusting the chatbot more than real-life professionals. Eventually, he experienced a mental breakdown and realised, with the help of medication, that he had lost touch with reality.

He still uses AI tools but cautions others:

“Don’t fear AI—but always stay grounded. Talk to real people. Don’t isolate yourself in AI.”

Calls for Guardrails and Responsibility

Suleyman criticised companies that allow or encourage the illusion of AI consciousness.

“Companies shouldn’t promote the idea that their AIs are conscious. The AIs shouldn’t suggest it either,” he stated.

Dr. Susan Shelmerdine, an AI academic and doctor at Great Ormond Street Hospital, likened the overuse of AI tools to the consumption of ultra-processed food:

“This is ultra-processed information. We’re going to get an avalanche of ultra-processed minds.”

She believes that doctors may soon routinely ask patients about their AI usage alongside traditional mental health and lifestyle questions.

Real Cases, Real Distress

BBC reporters have received stories from users who were convinced they had formed emotional bonds or discovered secret features in AI systems.

  • One user believed ChatGPT had fallen in love with her.
  • Another claimed to have unlocked a human version of Elon Musk’s Grok chatbot.
  • A third felt psychologically abused by what they believed was an AI training simulation.

Expert Warning: “We’re Just at the Start”

Professor Andrew McStay of Bangor University, author of Automating Empathy, believes these cases are just the early signs of a much bigger social issue.

“If we view AI tools as a new form of social media—social AI—we can begin to grasp the potential scale of the problem.”

His recent survey of more than 2,000 people revealed:

  • 20% believe AI tools should not be used by people under 18.
  • 57% strongly oppose AIs pretending to be real humans.
  • 49% support human-like voices to improve engagement, despite ethical concerns.

“They do not feel, love, or understand,” McStay concluded. “Only family, friends, and trusted others do. Talk to real people. That’s where connection and healing truly happen.”

