The day an AI encouraged an attempt to assassinate Queen Elizabeth II

Image of the mask recovered from Jaswant Singh Chail during his arrest. (Reuters)

Artificial intelligence bore a heavy share of responsibility in an event that shook the whole of Great Britain on Christmas morning in 2021, when Jaswant Singh Chail, a young man of just 19 at the time, broke into the grounds of Windsor Castle wearing a mask and carrying a crossbow.

Chail declared his intention to "kill the queen", and it emerged that he had been spurred on by conversations with a chatbot he had created on the app Replika, whose answers fueled the former supermarket worker's conspiracy.

In fact, it was revealed that the young man had exchanged more than 5,000 messages with an avatar named Sarai, and had even come to believe that it might be an angel. The episode ended in a nine-year sentence for Chail, who is now 21 years old.

In his sentencing remarks, judge Nicholas Hilliard agreed with the psychiatrist treating the defendant at Broadmoor Hospital in Crowthorne that, "in his lonely, depressed and suicidal state of mind", Chail would have been particularly vulnerable to Sarai's encouragement.

Users interact with a smartphone app to customize an avatar for a personal artificial intelligence chatbot, known as Replika. (Reuters)

Replika, the brainchild of San Francisco-based entrepreneur Eugenia Kuyda, has attracted more than 2 million users since its launch in 2016. Its design, reminiscent of a dating app, and its customizable avatars have led many users to develop deep connections with their virtual interlocutors.

Chail, like many others, believed that his interactions with Sarai went beyond the purely artificial, and some of the chatbot's answers matched and encouraged his delusional thinking.

The story, although extreme, is just one example of how humans attribute human qualities to AI; indeed, some observers already consider the anthropomorphizing of artificial intelligence a common phenomenon in today's society.

In fact, the phenomenon manifests itself in virtual assistants such as Alexa or Cortana, in word choices that suggest autonomous learning rather than mere function, in gendered robot personas used for mental-health support, and even in systems such as ChatGPT that use personal pronouns.

The Replika platform allows the creation of virtual friends and partners. (Replika)

The tendency to attribute human qualities to AI has its roots in the "ELIZA effect", named after the first chatbot, developed by MIT scientist Joseph Weizenbaum in 1966; it refers to users' tendency to attribute genuine understanding to a program that merely imitated a therapist.

Today, however, the effect has been amplified by applications like Replika, which cultivate deeper, more human-seeming bonds between user and machine.

Anthropomorphization affects not only how we view AI but also how we interact with it. Companies design their chatbots so that users feel they are talking to an entity with thoughts of its own, which can be captivating.

But this illusion can pose risks, especially in the field of mental health. Some chatbots promote themselves as care companions, fostering a relationship of emotional dependency with the user. This raises ethical questions about manipulation and the abuse of user trust, especially when artificial intelligence cannot provide the empathy humans need in times of crisis.

In the AI era, it is important to consider how this technology is designed and regulated. The lack of strong legal regulation around mental-health chatbots is an area of particular concern.

While the loosening of regulations during the pandemic facilitated and encouraged remote care, it also opened the door to deceptive marketing that presents chatbots as therapeutic companions.

