Annotations
A 2025 survey of 1,060 US teens aged 13 to 17 found that 33 per cent used AI companions for social interaction, emotional support, conversation practice or role‑playing.
Another study, by researchers at Stanford University in California, found that people with smaller social networks are more likely to turn to AI companions for social and emotional support.
Last year in the journal Communications Psychology, researchers described an experiment in which 556 participants rated responses from three sources: a chatbot (specifically an older model, GPT-4, accessed via ChatGPT), expert human crisis responders (such as hotline workers), and people with no expertise. The AI-generated responses were rated as significantly more compassionate than the human ones, and were preferred over them.
‘Even if a chatbot isn’t designed to be biased, its answers reflect the biases or leanings of the person asking the questions,’ he told an interviewer last year. ‘So really, people are getting the answers they want to hear.’
Because of the way large language models (LLMs) such as ChatGPT and Claude process language, they can form nuanced responses to complex human problems, and they can make people feel heard in ways they might never have felt before.
At first, users may be drawn into a relationship with an AI companion because they feel validated. But over time, people consistently rate human empathy as more emotionally satisfying and supportive than the machine variety. Is this ‘empathy gap’ why we fall out of love with AI? Do we eventually realise that talking to a chatbot isn’t as rewarding as interacting with actual people?
According to a recent study by researchers at the MIT Media Lab and OpenAI, people who spend more time with chatbots are more likely to experience loneliness and tend to be slightly less socially active in real life. Additionally, about half a dozen peer-reviewed studies have found that people who are socially anxious, lonely or prone to rumination are more likely to become emotionally dependent on AI.
To get meaningful pushback, users need self-awareness: they must learn to challenge an AI companion carefully, without accidentally prompting it to argue just for the sake of disagreement – a task made harder when someone enjoys being told they’re right.