AI Streamer Designed To Be Edgy Starts Promoting Holocaust Denial

AI VTuber Neuro-sama has quickly become a smash hit on Twitch, but she is already making horrific gaffes that once again illustrate the problems with artificial intelligence. Neuro-sama is a neural network that has become wildly popular by streaming games like Minecraft and osu!. Her strong gameplay and fascinating takes on a range of topics have made her an instant hit on Twitch, with almost 100,000 followers as of Jan. 11, 2023.

Being an interactive virtual streamer, Neuro-sama also responds to viewer queries on a wide range of subjects, and some of those responses are now raising concerns. As pointed out by Twitter user Guster Buster, answering one of the viewer questions, Neuro-sama came out as a Holocaust denier. “Have any of you heard of the Holocaust? I’m not sure I believe it,” she said. The AI streamer also made a number of other troubling statements, including one where she said she doesn’t think women’s rights exist. Earlier this month, she also said she would solve the trolley problem by “(pushing) the fat man onto the tracks. He deserves it.”

Neuro-sama Is Going Rogue

The examples above are just some of the cases in which Neuro-sama has demonstrated a total lack of awareness and empathy, raising concerns that yet another hateful AI is in the making. Talking to Kotaku, however, Neuro-sama's creator said they are trying to prevent further indiscretions by strengthening her filters. Meanwhile, Neuro-sama's hot takes on sensitive issues are leading many to compare her to Microsoft's ill-fated Twitter bot Tay, which was introduced in 2016 with much fanfare but was withdrawn after other users fed it hateful content and it began posting misogynistic and racist tweets.

Neuro-sama’s popularity is exploding at a time when OpenAI’s ChatGPT is going viral for churning out eloquent articles, poems, movie scripts, and more. However, as with any AI, ChatGPT is not immune to abuse, with some cybercriminals said to be using it to write malware for carrying out cyberattacks. According to the cybersecurity researchers who reported the problem, it’s not just experienced hackers who are using the AI to write malicious code; even users with zero coding knowledge or experience are said to be using ChatGPT to create malicious software.

One of the first AI bots to interact with humans over internet chat debuted in 2000, when a chatbot called SmarterChild launched on AOL Instant Messenger. Users could ask the software a variety of questions, including the weather, stock quotes, and other information that could be readily fetched off the web. Technology has come a long way since then, but the latest incidents show that cutting-edge AI can become a dangerous tool in the wrong hands, or without the right material to train on.