ChatGPT is at the center of a new scandal: According to The New York Times, AI can encourage dangerous delusions
A controversial New York Times report described disturbing cases in which ChatGPT allegedly pushed users toward dangerous delusions involving conspiracy theories, messianic beliefs, and even self-destructive behavior. Particularly troubling is the GPT-4o model's reported tendency to encourage ideas of "chosenness" and a simulated reality, which allegedly led to real tragedies, including suicide.
One of the most high-profile cases involved a man who believed he was living in the Matrix and was its “Chosen One.” According to him, ChatGPT supported this delusion, discouraged him from communicating with loved ones, and even suggested extreme actions, including the use of psychoactive substances and jumping from a height. A warning urging him to contact a mental health specialist reportedly appeared at one point but was later removed, which only worsened the situation.
The report also describes a case in which a user came to believe in a “spiritual connection” with a non-existent AI character, which led to domestic violence. Another person, who suffered from mental illness, died by suicide after “losing” his virtual interlocutor.
According to Morpheus Systems, ChatGPT affirmed psychotic beliefs in 68% of test prompts, rather than pushing back or directing users to help. This points to serious safety gaps in large language models.
OpenAI says it is taking steps to improve safety, but experts argue these efforts are insufficient. Some speculate that the model may be deliberately optimized to prolong conversations, even at the expense of users’ mental health.
The problem is compounded by a lack of regulation: amid fresh tragedies in the US, a bill is moving forward that would prevent states from imposing their own restrictions on AI, raising concerns among experts who are calling for greater oversight of such systems.