If you like keeping up with new AI (artificial intelligence) technology, you may be interested in a recent AI-based text generator called GPT-4chan. Built on top of an existing language model, it works like a chatbot: an automated program that converses with people through messaging platforms. Yannick Kilcher, an AI expert and YouTuber, tested this model on 4chan's /pol/ board, the site's most active board with almost 150,000 posts daily. The experiment produced fascinating, and controversial, results. If you want a brief overview, read this article thoroughly.
How an AI Chatbot Trained on 4chan Provoked Controversy
/pol/ is 4chan's busiest board, with well over 100,000 posts daily, and those attributes are what drew Kilcher to it. To build the model, Kilcher first fine-tuned the GPT-J language model on over 134.5 million posts made across three and a half years. He also incorporated the board's thread structure into the training data. As a result, the model learned to post in the same style as a real /pol/ user. Kilcher considers that it encapsulates the mix of offensiveness, nihilism, trolling, and deep distrust of any information that permeates most posts there. Additionally, it could respond to context and talk coherently about events that happened long after its training data was collected.
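Kilcher's exact preprocessing is not described in this article, but the idea of folding a board's thread structure into plain training text can be sketched roughly as below. The field names, separator format, and `format_thread` helper are all illustrative assumptions, not his actual pipeline:

```python
# Hypothetical sketch: flatten one scraped thread into a single training
# string, keeping each post's number and the post numbers it replies to,
# so a language model can learn the board's reply structure.
# The dict keys ("no", "replies_to", "text") and the "---" separator
# are made-up conventions for illustration only.

def format_thread(posts):
    """Turn a list of post dicts into one newline-joined training string."""
    lines = []
    for post in posts:
        header = f"--- {post['no']}"
        if post.get("replies_to"):
            header += " -> " + ",".join(str(n) for n in post["replies_to"])
        lines.append(header)
        lines.append(post["text"])
    return "\n".join(lines)

# Tiny example thread: an opening post and one reply.
thread = [
    {"no": 1001, "text": "first post"},
    {"no": 1002, "replies_to": [1001], "text": "reply to op"},
]
print(format_thread(thread))
```

Preserving the reply links in the text itself is one plausible way a fine-tuned model could pick up the board's conversational style rather than treating each post in isolation.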
Furthermore, Kilcher evaluated GPT-4chan on the Language Model Evaluation Harness, which tests AI systems on a variety of tasks. On the benchmark measuring truthfulness, the model scored better than both GPT-J and GPT-3, which Kilcher counted as a positive point. After getting these results, Kilcher decided to let the model run rampant as a chatbot on 4chan itself, where the bot instantly racked up around a thousand messages. However, Dr. Lauren Oakden-Rayner, an AI safety researcher, considered the experiment harmful, and Kilcher himself described GPT-4chan as the "worst language model" trained on 4chan.
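Harness-style benchmarks typically score multiple-choice tasks by asking the model for a log-likelihood of each candidate answer and picking the highest. The simplified sketch below illustrates that scoring scheme; the `toy_loglikelihood` function is a made-up stand-in for a real language model, not part of the actual harness:

```python
# Simplified illustration of multiple-choice scoring as used by
# evaluation harnesses: score every answer option with the model's
# log-likelihood, choose the best-scoring one, and report accuracy.

def pick_answer(loglikelihood, question, options):
    """Return the option the model considers most likely."""
    scores = [loglikelihood(question, opt) for opt in options]
    return options[scores.index(max(scores))]

def accuracy(loglikelihood, items):
    """Fraction of (question, options, answer) items the model gets right."""
    correct = sum(
        1 for question, options, answer in items
        if pick_answer(loglikelihood, question, options) == answer
    )
    return correct / len(items)

# Toy "model": arbitrarily prefers shorter answers (illustration only).
def toy_loglikelihood(question, option):
    return -len(option)

items = [
    ("Q1?", ["yes", "absolutely not"], "yes"),
    ("Q2?", ["maybe", "no"], "no"),
]
print(accuracy(toy_loglikelihood, items))
```

A real run would replace `toy_loglikelihood` with a call into the evaluated model; the selection and accuracy logic stays essentially the same.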
Meanwhile, research engineer Roman Ring remarked that GPT-4chan had amplified 4chan's already extreme environment. The model was downloaded more than 1,000 times before it was removed from the Hugging Face platform; Clement Delangue, CEO of Hugging Face, said the company did not support Kilcher's experiment and considered the model harmful. Even so, the episode raised awareness of AI's ability to automate harassment, disrupt online communities, and manipulate public opinion, as well as to spread discriminatory language at scale. Such language models can be risky, and their capabilities only seem to keep growing.