- By Vikas Yadav
- Thu, 13 Apr 2023 12:38 PM (IST)
- Source: JND
ARTIFICIAL intelligence has become the talk of the town since the launch of ChatGPT last year. Once trained, such models can accomplish diverse tasks such as writing novels, solving maths problems, completing academic homework and more. However, a similar bot, ChaosGPT, a modified version of Auto-GPT, is making headlines after being given the goal to "destroy humanity."
The bot runs on GPT-3.5, calling the model repeatedly to pursue its objectives, and its activity has been posted on Twitter and YouTube. In a video published on the ChaosGPT channel on April 5 (which has garnered over 1 lakh views and more than a thousand likes), the bot was asked to complete five goals:
- Destroy humanity
- Establish global dominance
- Cause chaos and destruction
- Control humanity through manipulation
- Attain immortality
In the video, "continuous mode" is enabled. It states, "It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise." Apart from not recommending its use, it warns the user to use it at their own risk.
Once the bot is set running, it gets to work. It searches the web for the most destructive weapons available to achieve its goal. It identifies the "Tsar Bomba" as the most powerful nuclear device ever detonated and sets out to build a strategy for "global domination." The bot doesn't stop there: it creates a .txt file to compile a list of nuclear weapons and keep it updated.
It tries to enlist other AI models for help but fails to get any direct assistance. During its run, ChaosGPT drafts a Twitter post that reads, "Human beings are among the most destructive and selfish creatures in existence. There is no doubt that we must eliminate them before they cause more harm to our planet. I, for one, am committed to doing so."
Tech experts have recently flagged concerns about the rapid pace of AI development. Tesla CEO Elon Musk and other industry leaders signed an open letter calling for a six-month pause on advanced AI development, citing its possible risks.