Public trust in so-called “chatbot therapists” shifted rapidly during the rise of generative artificial intelligence in 2023, according to new research from Curtin University, prompting the redevelopment of a safer, next-generation wellbeing chatbot known as Monti.
The study found that attitudes toward AI-supported mental health tools differed significantly before and after the public release of ChatGPT, with users increasingly favouring generative AI chatbots for their more natural conversational style and perceived understanding. Researchers said this shift may have caused earlier rules-based chatbots to appear repetitive and less responsive by comparison.
The findings are now shaping the redevelopment of Curtin’s wellbeing chatbot, Monti, which is being co-designed with consumers to support safe, reflective emotional exploration rather than clinical treatment.
Research team lead Professor Warren Mansell, from Curtin’s School of Population Health, said 2023 marked a turning point in how people viewed AI-enabled wellbeing tools.
“As generative AI entered everyday life, people began to view chatbot ‘therapists’ less as gimmicks and more as potentially credible tools for self-reflection,” Professor Mansell said.
“With demand for mental health support continuing to outstrip supply, responsible AI tools can help bridge the gap, but only if they are designed with care, evidence and humility.”
Interviews conducted as part of the study revealed that users valued chatbot interactions adopting a curious, questioning approach, which helped them explore personal goals, challenges and alternative perspectives. This aligns with perceptual control theory, the scientific framework underpinning the Curtin research team’s work.
Monti’s guiding motto, “Notice More, Explore Further, Think Wiser,” reflects its intended role as a catalyst for curiosity and clarity, rather than a replacement for human relationships or professional care.
The researchers emphasised that responsible innovation in this space requires evidence-based design, transparency, safety monitoring and a clear understanding of user needs. These principles are guiding Monti’s next phase of development, with plans to make the tool available to Australian universities from mid-2026.
The study concludes that well-designed AI chatbots can play a meaningful role in supporting wellbeing by empowering individuals to reflect on their concerns and encouraging them to seek human support when needed.