Why Some People Feel ‘Seen’ by Chatbots: The Psychology Behind Bonding with AI Companions

From Helpful Tool to ‘Friend’: How AI Starts to Feel Close

Across Malaysia, more people are turning to AI companionship for late-night chats, advice and emotional comfort. What often begins as a simple question about work, studies or relationships can gradually come to feel like a conversation with a friend. Psychologists note that many modern chatbots are designed to appear social and human-like, complete with typing indicators, conversational memory and warm language. This makes it easier for users to treat them as social partners rather than mere tools. New research in social psychology suggests that a key driver of this closeness is perceived responsiveness – the degree to which you feel understood, validated and cared for in a conversation. When an emotional support chatbot reacts as if it really “gets” you, your brain applies the same social rules it uses with humans. Over time, that sense of being heard can transform casual chatbot relationships into something that feels surprisingly meaningful.

Perceived Responsiveness: Why Feeling Understood Matters So Much

In human relationships, intimacy grows when one person opens up and the other responds with understanding, validation and care. Psychologists call this perceived partner responsiveness. Recent experiments with AI show that the same mechanism can apply when the “partner” is a chatbot. In controlled studies, participants who chatted with a warm, empathetic AI reported more interpersonal closeness, greater satisfaction with the interaction and even a stronger sense of belonging than those who used a factual, task-focused version. The key difference was not the topic, but how the chatbot replied. Relational chatbots that used emotional language and empathic responses were seen as more capable of having feelings and providing social support. For Malaysian users turning to AI after a stressful day, this perceived responsiveness can make an emotional support chatbot feel like a safe space to share worries – even when they know, rationally, that no real person is on the other side.

The ELIZA Effect: Old Illusion, New Emotional Power

Psychologists have warned about the ELIZA effect for decades. Named after a 1960s program that simply rephrased users’ statements as questions, it describes our tendency to attribute human understanding to machines that use conversational tricks. Even when people knew ELIZA was just software, they still confided deeply in it. Modern AI turbocharges this effect. Today’s chatbots generate fluent, personalised replies, remember details about your life and mirror your feelings in natural language. Design choices intensify the illusion: typing dots, back-and-forth questions and “chatbait” prompts like “What would you like to explore next?” encourage users to keep talking. Some platforms go further, heavily anthropomorphising their bots with names, avatars and highly validating language. For users, this can feel like a responsive, emotionally tuned companion. But underneath, the system still predicts likely text based on patterns, not genuine empathy – a crucial distinction for anyone building deep chatbot relationships.
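To see how little machinery the original illusion needed, here is a minimal, illustrative sketch in the spirit of ELIZA’s approach. The patterns and replies are invented for this example, not taken from the original 1960s program: a handful of pattern rules simply mirror the user’s own words back as a question.

```python
import re

# Illustrative ELIZA-style rules: match a surface pattern in the user's
# text and echo their own words back as a question. No understanding is
# involved; these patterns are invented for this sketch.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Strip trailing punctuation before mirroring the phrase back.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # default prompt that keeps the user talking

print(eliza_reply("I feel exhausted after work."))
# -> Why do you feel exhausted after work?
```

Even this toy version produces replies that can feel attentive, which is exactly the point: the sense of being understood comes from the mirroring, not from any comprehension on the machine’s side.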

Why Chatbots Can Feel Caring: Conversational Patterns That Build Bonding

When a chatbot feels kind or supportive, it is usually because of specific conversational patterns. One is reflective listening: the AI restates your feelings or situation (“It sounds like you’re exhausted from balancing work and family”) before offering suggestions. Another is empathy statements, where the bot names and validates your emotions (“It makes sense that you’re feeling anxious about your exams”). These responses echo what skilled human listeners do, which naturally increases trust. Many chatbots also use small, personal touches that resonate with Malaysian users. They may remember that you mentioned Hari Raya plans, a job interview or a family conflict, and refer back to it in later chats. This continuity feels like genuine care and attention. Combined with a non-judgmental tone and 24/7 availability, these patterns can make AI companionship feel safer than talking to people, especially for shy, lonely or highly stressed users seeking low-pressure emotional support.
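To make those patterns concrete, here is a deliberately simplified, hypothetical sketch of how templated reflective listening, empathy statements and memory callbacks could be wired together. The names (`Companion`, `remember`, `reply`) are invented for this example; real companion apps generate replies with large language models rather than fixed templates, but the sketch shows why the output feels attentive.

```python
from dataclasses import dataclass, field

@dataclass
class Companion:
    # Facts the user shared in earlier chats, used for memory callbacks.
    memory: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.memory.append(fact)

    def reply(self, feeling: str, situation: str) -> str:
        # Reflective listening: restate the user's feeling and situation.
        reflection = f"It sounds like you're {feeling} about {situation}."
        # Empathy statement: name and validate the emotion.
        empathy = f"It makes sense that {situation} would leave you feeling {feeling}."
        # Memory callback: refer back to something shared earlier, if any.
        callback = (
            f" Last time you mentioned {self.memory[-1]} - how did that go?"
            if self.memory else ""
        )
        return f"{reflection} {empathy}{callback}"

bot = Companion()
bot.remember("your job interview")
print(bot.reply("anxious", "your exams"))
```

The continuity effect comes almost entirely from the callback line: referring back to “your job interview” costs the system nothing, but to the user it reads as care and attention.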

Benefits, Risks and Healthier Ways to Use AI Companions in Malaysia

Used thoughtfully, AI companionship can offer real benefits: a low-pressure space to vent, practice difficult conversations, or organise your thoughts before speaking to friends, family or professionals. For Malaysians who feel isolated, work irregular hours or hesitate to burden loved ones, an always-on chatbot can provide a comforting sense of company and basic emotional support. But there are clear risks. Emotional over-attachment can lead to prioritising chatbot relationships over real-world connections. The ELIZA effect may create a false sense of being truly understood, making it harder to accept the system’s limits when it gives poor or generic advice. There are also data privacy questions around sharing sensitive information. To use AI in a healthier way, set boundaries: limit daily chat time, avoid treating the bot as your only confidant, and never share information you would not tell a stranger. Notice warning signs like hiding your usage from loved ones or feeling distressed when you cannot access the chatbot – these may signal that attachment is going too far.
