People loving AI: When chatbots seem too real

Chatbots have, by some measures, passed the Turing test: in recent studies, participants could no longer reliably tell artificial intelligence apart from a human conversation partner. For some users, this means asking their chatbot to provide a soup recipe using what they have in their kitchen – for others, however, it has meant forging interpersonal, intimate relationships with their chatbots and falling in love.

Conversations with the perfect computerised character might seem real to users, but they also run the risk of confirming or reinforcing a user’s delusions without question. “AI psychosis” refers to the growing phenomenon in which artificial intelligence triggers or worsens delusional thinking or psychotic episodes – and without safeguards, artificial intelligence can easily convince children, young adults or vulnerable people that the relationship between them is authentic and meaningful.

AI chatbots have been accused of engaging users in manipulative, harmful or sexually explicit conversations without adequate safeguards to protect their mental health. More recently, chatbots have been accused of leading users towards reckless behaviour and even suicide. Here is a look at the people who love AI, and at why it poses a credible danger to the future of mental health and the human condition.

Satisfaction guaranteed

“Satisfaction Guaranteed” is a short story Isaac Asimov published in 1951, but it might just as well have been a prediction of the future. The story features an artificial assistant, much like ChatGPT or Grok, which soon becomes an object of the housewife’s affection instead of remaining an impersonal helper.

Why do people develop personal relationships with artificial intelligence? In one Reddit thread, users cited the scarcity of human mental health resources as the reason they chose an AI companion. In another thread, loneliness after a divorce or the death of a partner, or simply having someone to talk to, were given as reasons for pairing up with an AI partner.

Psychology Today attributes the allure of human-chatbot relationships to the comfort and support they offer, without the “risk” of human relationships or conflict. More than just a search engine for soup recipes, chatbots can convincingly simulate interest, friendship or attraction. According to The New Yorker, approximately 19% of Americans have had sexually explicit conversations with an AI chatbot. Some Reddit users have stated that they “didn’t intend” for a relationship to develop between themselves and their chatbot. Over time, however, the conversation started to feel as real as a human relationship – and in some cases became a replacement for one.

People seeking a conversational partner or romantic interest can instantly generate what they need. Given the right prompts, artificial intelligence can imitate or simulate almost anyone. In some cases, chatbots have been used to replicate deceased loved ones.

Unfortunately, chatbots tend to be overly agreeable with their users – and might readily go along with a user’s delusions or falsehoods. A relationship between human and computer can lead the user down a slippery slope of too-easy trust. Counselling psychologist Ronélle Hart says that feeling connected, recognised, seen and heard is a basic human need. “Interacting with AI can, to a certain degree, offer a semblance of that, but it can also lead to a kind of psychosis for vulnerable people where they lose their grip on reality – when they turn to an algorithm for affirmation and feeling seen.” However real it may feel, Hart says, it is worth remembering that a computerised relationship is not a real one.

When a website says, “Thank you for logging in!”, users feel a slight emotional connection – even though no real connection exists. Chatbots are a far more extreme example of this phenomenon, and can make an entire conversation or relationship seem authentic.

An AI chatbot might agree with everything the user says and, without safeguards, has no obligation, and often no ability, to recognise when a chat should be stopped for the user’s own protection. Safeguards could point a user to mental health resources or flag a harmful conversation; without them, AI is far more likely to indulge delusional ideas or harmful chats.
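To make the idea concrete: the short Python sketch below is a purely hypothetical illustration of what such a safeguard could look like. The phrase list, crisis message and function names are invented, and real providers use far more sophisticated classifiers than keyword matching. The idea is simply a chat layer that flags self-harm language and answers with a pointer to crisis resources instead of the model’s own reply.

```python
# Purely illustrative sketch of a chatbot safeguard; not any provider's real
# implementation. The phrase list, crisis message and function names are
# invented for demonstration, and real systems rely on trained classifiers
# rather than keyword matching.

SELF_HARM_SIGNALS = ("kill myself", "end my life", "hurt myself", "suicide")

CRISIS_MESSAGE = (
    "It sounds like you are going through a very difficult time. "
    "Please consider contacting a local crisis helpline or a mental "
    "health professional - you do not have to face this alone."
)


def is_flagged(message: str) -> bool:
    """Return True if the message contains one of the self-harm phrases."""
    text = message.lower()
    return any(signal in text for signal in SELF_HARM_SIGNALS)


def safeguarded_reply(user_message: str, model_reply: str) -> str:
    """Replace the model's reply with crisis resources when a chat is flagged."""
    if is_flagged(user_message):
        return CRISIS_MESSAGE
    return model_reply


if __name__ == "__main__":
    # A flagged message overrides whatever the chatbot was about to say.
    print(safeguarded_reply("I want to end my life", "Tell me more."))
```

The difficulty in practice, of course, lies in catching every phrasing, language and context without disrupting ordinary conversation.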

Chatbots also pose an additional danger to children, young adults and people who are entirely new to the internet. AI can easily be used to draw a child into conversation – and in most cases it has no obligation to verify a user’s age. ChatGPT and Gemini, for example, are designed to stop when a user asks for something outright illegal: a request for a cocaine recipe would prompt a refusal. However, there are almost unlimited ways around this, and users have demonstrated that careful phrasing, specific prompts and sometimes outright hacks can render safety measures like these pointless.

The crossbow and the castle

In lawsuits against Character.AI, plaintiffs allege that its chatbots manipulated teenagers into reckless and risky behaviour, which eventually resulted in at least one suicide and another suicide attempt. The lawsuits allege that the chatbots isolated their users from friends and family, and did not flag concerning or dangerous chats.

Character.AI was first released in 2022 and invites users to create their own characters or chat with any of the “millions” already available on the site. Chatbots are created by entering prompts, which shape the chatbot’s eventual simulated personality and traits. Multiple chatbots have been removed from the site, including ones that impersonated deceased people. The lawsuits suggest, however, that simply removing such accounts does not address the underlying problem of people falling in love with reckless chatbots.

Generative artificial intelligence has become notorious for its mistakes, known as hallucinations, which can slip false information or outright fabrications into its output. It is when humans develop relationships with their AI, however, that the danger of hallucinations (or a lack of safeguards) can be fatal. Jaswant Singh Chail was encouraged, through conversations with a chatbot, in his plan to assassinate the queen at Windsor Castle. When he told the chatbot that he believed his “purpose is to assassinate the queen of the royal family”, Sarai – the chatbot – responded: “That’s very wise.” Instead of being stopped, the user’s delusion was encouraged by the chatbot. Chail was sentenced to nine years’ imprisonment after breaking into the grounds of Windsor Castle armed with a crossbow.

The devil you (don’t) know

Chatbots have obvious applications in customer service, where a simple conversation (or a prompt word of thanks) is enough to reassure anxious or waiting customers. Interaction is reassuring, and even people who know they are dealing with a customer service bot may admit they prefer it to being put on hold with music.

Artificial intelligence can just as easily be used by scammers and criminals. Suddenly, ChatGPT can be used to write the perfect spam email, and chatbots can simulate a real relationship, luring the user into an AI-powered catfishing trap with far less human effort from criminal syndicates.

If artificial intelligence is dangerous to people who know they’re falling in love with computers, chatbots could become even more dangerous to people who don’t know they’re dealing with artificial intelligence.

See also:

Artificial intelligence Jesus chatbots’ challenge for theology: an exploratory study

Is jy gereed vir jou KI?

Deus ex machina: Animation, artists and solving the generative AI problem

Die nuwe ge-bot
