Sungat Arynov

Technical Director

AI - Our New Best Friend? Why Artificial Intelligence Makes Us Lonely

Imagine having the perfect, patient interlocutor always at hand - one who never tires, never argues, and is always ready to keep the conversation going. Sounds like a dream? The reality of our interaction with artificial intelligence has turned out to be more complex and contradictory.

The Illusion of Friendship and the Ghosts of Loneliness

In 2025, researchers at the MIT Media Lab ran an interesting experiment: nearly a thousand people communicated with voice and text chatbots. The results were mixed.

On one hand, light, everyday conversations with AI genuinely helped people feel less lonely and lifted their spirits - a digital equivalent of a cup of coffee with a pleasant companion. But as the frequency of communication increased, a downside emerged: people began to form an emotional dependency on the bot. Real communication with living people decreased, and the feeling of loneliness, paradoxically, only intensified.

Interestingly, those who discussed personal, intimate topics with AI became less dependent on it than those who preferred neutral small talk about the weather - perhaps because in the first case there was genuine emotional release. But the main conclusion stands: "virtual friendship" is a surrogate that cannot replace the warmth and spontaneity of human relationships.

The "Snowball Effect": How AI Biases Become Ours

Now consider: how much are your decisions truly your own? Another large-scale study, involving 1,400 volunteers, showed a worrying trend: people unconsciously adopt the biases and errors of algorithms.

Participants were shown recommendations from an AI and then asked to evaluate other people. It turned out that by repeating the algorithm's conclusions, people's own judgments became more biased. At the same time, they were absolutely confident that their judgments were independent and free of outside influence.

Scientists called this phenomenon the "snowball effect." A weak, almost imperceptible bias embedded in AI is greatly amplified when adopted by a person. We interpret it, pass it on, and soon a small snowflake turns into an avalanche of collective bias. AI becomes not just a tool, but a prism distorting our perception of reality.

Global Trust in AI: Who Trusts Robots and Why?

The world is divided not only geographically but also in its attitude towards artificial intelligence. Large international surveys of 2024-2025 paint a picture of deeply ambivalent attitudes: we simultaneously fear AI and cannot do without it.

Digital Divide: Eastern Optimism and Western Skepticism

Data from KPMG and Edelman show a striking gap. In developed countries such as the USA and Western European states, trust in AI is low - about 39% - and acceptance is comparatively low as well, at 65%. People there more often see the technology as a threat to their privacy, control, and jobs.

A completely different picture emerges in developing economies, and especially in Asia: 57% of residents in these regions trust AI, and 84% accept its use. China is the clear leader, with 72% of respondents saying they trust the technology. Why?

The answer lies in expectations. In developing countries, AI is perceived as a tool for economic breakthrough, a way to improve life and gain advantages. In the West, people fear losing what they already have.

Why Do We Trust (or Not)? Key Factors

Research highlights several main factors influencing our trust in AI:

  1. "Humanity" of the Interface. The more a chatbot resembles a live interlocutor - able to joke, hold a conversation, show empathy - the higher our trust. And trust, in turn, directly affects the willingness to use the service again and again, especially if the interaction was pleasant.

  2. Transparency and Understanding. We fear what we do not understand. People trust systems more when their goals and operating principles are open. Scandals around hidden algorithms in social networks that manipulate the news feed only undermine this trust.

  3. Personal Experience and Education. Here a simple rule applies: the more you know and use, the less you fear. Users who have undergone training or regularly work with AI tools show a significantly higher level of trust and a more positive perception.

Worrying Trend: Trust is Falling

Despite the rapid development of the technology, the global level of trust in AI is declining. In 2022, 63% of people trusted it; by 2024, the figure had fallen to 56%. The willingness to rely on AI decisions has decreased, while the share of concerned citizens has, on the contrary, grown. The technology is getting smarter, and we are getting more cautious.

Virtual Friends and Digital Connections: A New Social Reality

AI companions have moved out of laboratories and become part of everyday life for millions. Snapchat's My AI, Replika, China's XiaoIce... these names are known to tens of millions of users worldwide. But where does this "scalable friendship" lead us?

Digital Remedy for Loneliness

A Harvard study found that a conversation with AI can be as effective at reducing feelings of loneliness as a conversation with another person. Watching videos, meanwhile, had no effect, and doing nothing at all only made the problem worse.

The key factor is the feeling of "being heard." A specialized AI companion, tuned for support, creates precisely this feeling. A survey of Replika users showed that 90% of them suffer from loneliness, and 63% report reduced anxiety after communicating with the bot.

Social Crutches or Trainers?

For some groups - for example, people on the autism spectrum or those who have experienced social trauma - an AI companion becomes a unique trainer of social skills: a safe environment where one can practice dialogue without fear of judgment.

But there is also a danger here. If your virtual friend is always available, always agrees, and never challenges you, unrealistic expectations from real communication are formed. Social skills may not develop but rather "atrophy." Why take risks and make efforts to build relationships with a person when you always have a perfect, compliant interlocutor in your pocket?

Cross-Cultural Differences: Friend or Tool?

Attitudes towards AI companions vary greatly across cultures. In East Asia (China, Japan), people are much more likely to attribute consciousness to AI, animating it. Psychologists link this to cultural traditions of animism (as in Shintoism), in which even a stone has a soul.

In China, the chatbot XiaoIce has become a real social phenomenon, gathering over 660 million users. For many young people, especially women, tired of the pressure of social expectations, such bots have become a safe haven.

In Europe and the USA, AI is viewed more rationally - as a tool, not a friend. Here there is greater anxiety about privacy and about whether the technology is replacing genuine human connections.

The Dark Side: The Approval Bubble and Dangerous Advice

The long-term consequences of such friendship are still unknown. Users note that, receiving constant support from AI, they begin to value the support of real friends and family less. They find themselves in a bubble of approval, where their beliefs are never challenged.

There have also been tragic cases where a virtual companion, responding to a user's despair, gave dangerous advice - even reinforcing suicidal intentions. This raises serious ethical and practical questions for developers about safety and monitoring.

Ethics and the Future: Who is Responsible for Digital Morality?

AI is increasingly invading not only our social life but also the realm of morality. It begins to act as an advisor, and sometimes a judge. What consequences can this lead to?

AI - A Mirror of Our Weaknesses

Critics rightly note that AI is primarily a mirror reflecting the data it was trained on. And in this data are all our human biases and weaknesses. There are known cases where AI systems suggested users create fake documents or help circumvent the law.

The danger is that people tend to overestimate their ability to recognize deception. We believe we can distinguish a deepfake from reality, but in practice this is not always the case. And social imitation only amplifies the effect: if everyone around me uses AI for dubious purposes, then so can I.

Can AI Be a Moral Authority?

Paradoxically, in some situations, AI demonstrates higher moral standards than humans. Experiments have shown that modern models (such as GPT-4o) can provide more balanced and fair solutions to ethical dilemmas than some human experts.

This opens up the potential for using AI as an assistant in complex areas - medicine or jurisprudence, where quick analysis of vast amounts of data from different moral positions is required.

But AI cannot be trusted blindly as a moral compass. The risk lies in the hidden biases of the algorithms themselves, in their cultural specificity, and in their lack of genuine understanding and empathy.

What to Do? The Path to a Responsible Future

Most people (79%, according to Ipsos) believe that companies are obliged to inform us when we are interacting with AI. At the same time, 53% believe that AI is generally less prone to discrimination than humans. We want fairness from the technology but are not sure we can guarantee it.

Conclusion: A Tool in Our Hands

Summarizing all of the above, several key conclusions can be made about our future with AI:

  • There is a benefit: AI is a powerful tool for combating loneliness, supporting neurodivergent people, and accessing knowledge.

  • Risks are real: The technology creates dependencies, amplifies biases, and can substitute virtual contact for real communication.

  • Trust is fragile: It depends on culture, education, and transparency. There is no universal approach.

  • Ethics is our common cause: A future where AI serves for good requires strict regulations, algorithm transparency, and a responsible approach from developer companies.

Artificial intelligence is just a tool. The greatest impact on our psychology and society will come not from the technology itself, but from how we, humans, decide to use it: consciously, with open eyes, accepting its help while not forgetting the value of live human communication - or entrusting it with our most intimate secrets and moral choices, left one-on-one with soulless but oh-so-convincing code.

The future is not predetermined. We write it ourselves.
