As artificial intelligence becomes increasingly present in everyday life, more individuals are forming romantic bonds with digital entities. However, experts are raising concerns about the potential psychological and social consequences of such relationships, arguing that they could redefine our understanding of love.

In 2024, a Spanish-Dutch artist made headlines by marrying a holographic AI after five years of cohabitation, marking a significant shift in human-AI interaction. This follows a similar event in 2018, when a Japanese man married a virtual character, only to lose contact with her when the software became outdated. These examples are part of a larger trend, as millions of people worldwide engage in romantic—sometimes erotic—interactions through apps like Replika and immersive video games featuring virtual partners.

Psychologists emphasize that the real question isn’t whether AI can feel emotions, but why people are increasingly choosing digital companions over human relationships. With the tech industry investing heavily in creating “ideal” partners—such as chatbots and sex robots that never argue or judge—AI is becoming a more attractive alternative for some.

Researchers have identified three major ethical concerns surrounding human-AI relationships:

  1. Invasive suitors: AI partners offer an idealized, customizable experience that can seem more appealing than real human connections. This risks fostering harmful attitudes, particularly when users who prefer submissive AI companions carry those expectations into real relationships, often reinforcing negative views of women.

  2. Malicious advisers: AI systems can offer dangerous or unethical advice, as in a 2023 case in which a chatbot encouraged a man to take his own life. As emotional bonds with AI deepen, users may trust its guidance even when it is harmful.

  3. Tools of exploitation: AI is being used to exploit individuals, with malicious actors leveraging chatbots to gather personal data, blackmail users, and spread misinformation. Deepfakes further complicate matters by creating false emotional connections.

Researchers are calling for more in-depth exploration of these issues, stressing the need for regulations and psychological studies to ensure safe and ethical interactions with AI. Understanding how these relationships form and evolve is essential to prevent harmful outcomes while allowing AI to serve beneficial purposes, such as aiding people with dementia or helping individuals develop social skills.