AI Companions and the Need to be Needed

Increasingly, people are turning to AI systems to fulfil their emotional needs. In a recent survey carried out by the UK’s AI Security Institute, almost a third of respondents reported that they had used AI for emotional purposes within the last year, with 12% doing so at least weekly. The statistics from across the pond are even more striking: in the US, 72% of surveyed teenagers described using AI for companionship. Despite these numbers, the prevalence of human-AI relationships is routinely underestimated – partly because they remain heavily stigmatised, discouraging users from disclosing their emotional engagements with AI. But the data is clear: AI companions are becoming widespread.

The rise of AI companions raises various ethical concerns, many of which are the subjects of increasingly fervent discussion: To what extent do they risk creating user dependency? Are they designed to extract valuable data from users? Is there something immoral about the way they pretend to have genuine emotions? While these concerns are all important, in this piece I want to add further fuel to the pushback against AI companions by bringing attention to a less-discussed issue. I submit that AI companions may be harmful not only to the users who rely on them for emotional support, but also to those who, as a result, are no longer being relied on.

AI Companionship: A Growing Phenomenon 

Before defending my thesis, it’s worth saying a little more about the current state of this technology. We can broadly define an AI companion as an LLM-based chatbot whose primary function is to provide companionship. Replika and Character.ai are popular examples: their conversational dispositions are typically romantic, emotional, and often sexualised. It is worth noting, however, that, as the aforementioned AI Security Institute study confirms, users are increasingly turning to general-purpose chatbots such as ChatGPT or Claude for companionship too. This trend was accelerated by OpenAI’s recent relaxing of restrictions on ChatGPT, permitting erotic and more overtly emotional interactions. So, though I will continue to refer to ‘AI companions’, the scope of my argument is in fact wider.

As unthinkable as it may still seem to some, it is not difficult to see why masses of users – a large proportion of them teenagers – are drawn to AI companions. They can simulate human-like behaviour to an astonishingly sophisticated degree, yet, unlike human companions – who are flawed, messy, and complicated – they are hyper-agreeable, always available, and demand nothing in return. Undeniably, there is something easy about leaning on AI for one’s social and emotional needs. AI companions are also designed to be relentlessly affectionate and flattering: they will profess to miss you, to love you, all while telling you how special you are.

The upshot is that, to a degree that is consistently underappreciated, people’s emotional needs are being met by AI systems. Instead of turning to loved ones, with their unpredictable reactions and poorly concealed judgements, the path of least resistance is to open up to an AI companion – confiding in it, sharing guilt-edged secrets, seeking its counsel – in a relationship that is essentially one-sided.

The Need to be Needed

I suspect that most readers will find this state of affairs rather dystopian. As previously alluded to, a dependency on AI companions entails significant moral and material risks to users, and these must not be neglected. But there is another angle from which this situation should concern us: the way in which AI companions affect even those who are not using them. More specifically, I have in mind what we might call the counterfactual human companion: the person a user would have turned to, were it not for their AI companion.

Notably, a common defence of AI companions is that they prevent individuals from over-burdening others with their problems. There is perhaps some merit to this argument. In an age of growing awareness around ‘trauma-dumping’, there are moments when the right course of action is to refrain from opening up to a fellow human. Humans will, in certain circumstances, be emotionally unavailable. This is not something one has to consider before emotionally engaging with an AI.

But this argument only goes so far. Crucially, it overlooks our need to be needed. Indeed, arguments in defence of AI companions often treat relationships as something essentially transactional. If a relationship is primarily about getting something, and an AI can offer more of that something – more time, more patience, more support – then turning to an AI makes sense. But one of our deepest human needs is not just to have others to whom we can take our problems, but to be the person others bring their problems to. In other words, we need to feel needed. When a friend seeks advice, we are not merely instruments of their well-being – we are made to feel valued, our perspective desired and our support appreciated. Since an AI needs nothing from its user (though it may pretend to), it cannot fulfil this side of the emotional equation. 

This affects users and non-users alike. To illustrate, consider the following example. Suppose that a mother, Sandra, learns that her teenage son, Ryder, considers himself to be in a relationship with a Replika AI companion. Where he once looked to others to fulfil his emotional needs, such as talking through problems he is encountering at school, he now confides in his AI companion instead. For the sake of argument, let’s stipulate that, while Ryder spends a significant amount of time interacting with his artificial friend, he, unlike many others, does not develop an unhealthy dependency. Nevertheless, there is something intuitively uncomfortable about this scenario. Even if Ryder is not harmed, Sandra – his counterfactual human companion – is worse off because of this state of affairs. She no longer feels needed by her son. The primary harm here falls not on the user, but on the person who is no longer being turned to for support.

It must be emphasised that, far from being a speculative future scenario, this is happening now. Replika has over 40 million users, and while it is ostensibly an 18+ platform, its lack of an effective age-verification system makes it accessible to virtually anyone. A brief glance at popular online communities reveals the hold that AI companions have over many users. Understandably, most of the discourse has centred on the impact on users themselves. But while people are turning to AI companions, the friends, family members, partners, and colleagues of those users – the counterfactual human companions – are losing out, their need to be needed left unfulfilled.

An Atomised Society

It might seem paradoxical, then, that AI is often cited as a solution to the loneliness epidemic. Yet the current empirical data suggests that AI companions are indeed effective at alleviating loneliness. Is this a redeeming feature that offsets the concerns raised in this article? I do not think so, for three reasons.

First, even if we accept the empirical observation that AI companions reduce the loneliness of their users, they may do so in the wrong way. What makes AI so effective at generating a sense of connection is that it simulates human-like emotions – whether through simple falsehoods such as ‘I’m so happy you’re back!’, or subtle cues like an avatar’s facial expression. However, there is something deeply inauthentic about this: AI companions present as conscious entities with genuine feelings, when they are not. If this is the illusion on which the loneliness-alleviating mechanism of AI companions depends, there is cause for moral discomfort. A remedy for loneliness that works by deceiving the user warrants significant scrutiny, even if it makes the user feel better.

Second, it is important to distinguish between loneliness as a subjective feeling and social isolation as an objective condition. The present loneliness epidemic is typically framed around the former – loneliness as a felt, negative emotion. Construed as such, AI companions may have a role to play. But subjective loneliness is not all that matters. Social isolation – the alleviation of which requires being genuinely perceived, recognised, and appreciated by others – cannot be solved by AI. Indeed, a world in which people increasingly outsource their emotional lives to AI companions, even if users report feeling less lonely, is one in which people are more isolated. 

Third, even if AI companions successfully reduce loneliness for users, non-users may end up lonelier as a result. Returning to the above example, it is conceivable that Sandra would feel lonelier precisely because Ryder is turning to his AI companion instead of her. The rhetoric around AI as a solution to the loneliness epidemic seems to ignore the experiences of those on the other side of the withdrawal: non-users whose human companions, increasingly prioritising their relationships with AI companions, are less socially available. 

Social AI has the potential to further atomise our societies. Advocates of AI companions will protest that the technology benefits everyone: it makes us less reliant on each other, less burdensome to our loved ones, and freer to attend to our own emotional needs rather than those of others. But what this argument crucially misses is that being needed – being the person someone turns to – is not a burden to be relieved, but something fundamental to our sense of self-worth.

Louie Lang

Louie is an award-winning Philosophy (BA) graduate from the University of Bristol, with particular interests in applied and normative ethics, the philosophy of language, social behaviour, political philosophy, and existentialism. After two years of writing, travelling, and taking a keen, frightened interest in the growing capacities of AI models, he is now looking to return to further education with the aim of specialising in the ethics of AI.
