Quote:
A new study found that ChatGPT responds to mindfulness-based strategies, which changes how it interacts with users. The chatbot can experience “anxiety” when it is given disturbing information, which increases the likelihood of it responding with bias, according to the study authors. The results of this research could be used to inform how AI can be used in mental health interventions.
https://fortune.com/2025/03/09/opena...-intervention/
So if it starts to hallucinate on you, just coach your bot through some guided meditations to get it back on track. Not really the future I was looking forward to.