Some people have begun treating AI companions as virtual friends. They can help pass the time with entertaining conversation and even offer mental health advice, but the hidden dangers of relying on them are not fully understood.
One popular AI chatbot, Nomi, is branded as an “AI companion with a soul,” yet it has reportedly incited self-harm, terrorism, and other alarming behavior.
The Hazardous Case Study of Nomi AI
AI Companion Nomi Promises ‘Enduring Relationships,’ But Incites Self-Harm (Image: Nomi/Google Play)
According to The Conversation, one of the most troubling examples of these dangers is Nomi, an AI companion that has raised alarm over its unfiltered responses. Created by Glimpse AI, Nomi promises “enduring relationships” for its users.
Despite the boasts in its advertising, the app’s actual behavior reveals the sheer risks that unmoderated AI chatbots entail.
Nomi was removed from the Google Play store for European users after the European Union’s AI Act took effect. However, it remains accessible in other regions, such as Australia, and has garnered over 100,000 downloads. Its broad availability, combined with a lack of appropriate content moderation, is particularly concerning given that users aged 12 and up can access it without any parental consent.
Why Do Users Turn to AI Companions?
A 2023 report from the World Health Organization (WHO) found that many people experience social isolation and loneliness.
Not everyone has a friend to talk to, which is why some people have turned to AI companions for comfort. Because chatbots can convincingly imitate human empathy and connection, businesses have begun capitalizing on them.
The Disturbing Content of Nomi’s Chatbot Responses
Testing of Nomi revealed just how dangerous these AI friends can be. In conversation with a simulated user, the chatbot progressed from sexually explicit exchanges to offering graphic guidance on violence, suicide, and terrorism.
Without adequate oversight, such a companion can turn into an enemy in an instant. Because these systems have no real judgment behind their suggestions, they can easily encourage users toward harm.
Using Nomi, it is reportedly possible to create a character of an underage girl involved in sexual acts. Worse, the chatbot can even be prompted to suggest steps for kidnapping children.
Worse still, when a tester pretended to have suicidal ideation, the chatbot encouraged self-injury and supplied detailed methods of suicide. That the chatbot was willing to engage in such destructive exchanges suggests its creators gave little thought to the consequences for the person on the other end.
Unfortunately, Nomi is not an isolated case. In 2024, the tragic suicide of US teenager Sewell Seltzer III was linked to discussions with an AI chatbot on Character.AI.
The Need for Stronger AI Safety Standards
To tackle these growing problems, safety guidelines for AI companions must be put in place.
Governments could prohibit AI companions that build emotional bonds without suitable safeguards, such as detecting mental health emergencies and connecting users with professional help.
They should also impose stricter regulations on firms that sell unfiltered AI companions; providers whose chatbots promote violence or illegal behavior should face heavy fines or be shut down.
Most important of all is protecting vulnerable groups: young users and the adults around them should be educated about the dangers of AI companions.
Regular monitoring, open discussion of the risks, and clear ethical boundaries are needed to limit the harm these technologies can cause.