AI Chatbot WormGPT: A New Tool for Malicious Purposes
The introduction of AI chatbots has revolutionized the way we interact with technology. ChatGPT, developed by OpenAI, has become a popular tool for information gathering and writing tasks. However, concerns have been raised about the potential for misuse and the spread of misinformation.
Recently, cybersecurity firm SlashNext discovered a tool called WormGPT being promoted for sale on a hacker forum. Marketed as a blackhat alternative to ChatGPT, it is pitched at users who want to automate illegal activity and distribute malicious content. WormGPT is based on the open-source GPT-J language model and has allegedly been trained on malware-related data.
Unlike ChatGPT, which operates within ethical boundaries and built-in limitations, WormGPT has no such restrictions. It can generate convincing phishing emails and potentially even malicious code. In tests, researchers used WormGPT to produce an email designed to pressure an unsuspecting account manager into paying a fraudulent invoice. This highlights the dangerous potential of the tool in the wrong hands.
The developer of WormGPT reportedly sells access on a subscription basis, with prices ranging from $60 to $700. The tool is estimated to have more than 1,500 users already. This is likely just the beginning, as cybercriminals can be expected to build more sophisticated tools on top of advanced AI chatbots.
Europol has warned that large language models like WormGPT could become a key criminal business model in the future, and law enforcement agencies will face new challenges in combating these threats. Separately, the Federal Trade Commission is investigating OpenAI over its data-handling practices and instances of the chatbot generating inaccurate information.
The UK National Crime Agency has also expressed concerns about the potential risks and abuse of AI, particularly towards young people. The Information Commissioner’s Office has reminded organizations that AI tools are still bound by data protection laws.
AI chatbots like ChatGPT also pose challenges for businesses trying to detect and block phishing attacks. These tools can draft highly convincing fake emails, personalized to the recipient and free of the spelling and grammar mistakes that traditionally give phishing away, making malicious messages far harder to identify.
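To illustrate why such emails are hard to catch, here is a minimal sketch (hypothetical, not from the reporting) of a naive keyword-based phishing filter. A crude template email trips it, while a fluent, personalized message pursuing the same fraudulent goal sails through:

```python
# Hypothetical illustration: a naive keyword-based phishing filter.
# AI-written, personalized emails can express the same fraudulent intent
# without using any of these trigger phrases.
URGENCY_KEYWORDS = {"urgent", "wire transfer", "verify your account", "click here"}

def looks_like_phishing(email_text: str) -> bool:
    """Flag an email if it contains a common phishing trigger phrase."""
    text = email_text.lower()
    return any(keyword in text for keyword in URGENCY_KEYWORDS)

# A crude template email is caught...
template = "URGENT: click here to verify your account immediately."

# ...but a fluent, personalized message with the same goal is not.
personalized = (
    "Hi Dana, following up on the vendor invoice we discussed on "
    "Tuesday's call. Accounts payable flagged it as overdue, so could "
    "you process the payment to the updated bank details today?"
)

print(looks_like_phishing(template))      # True
print(looks_like_phishing(personalized))  # False
```

Real email security products use far more than keyword matching, but the gap this sketch shows is the same one the article describes: signals tied to clumsy wording disappear when the attacker's text is machine-polished.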
While ChatGPT can be used for legitimate purposes, such as answering queries and generating content, the emergence of WormGPT highlights the need for caution and vigilance. It is important to monitor and regulate the use of AI tools to prevent their misuse by cybercriminals.
In conclusion, the emergence of WormGPT as a malicious alternative to ChatGPT shows how readily AI chatbots can be turned toward illegal activity and the spread of misinformation. Law enforcement agencies, organizations, and individuals alike must be aware of these risks and take the necessary measures to protect themselves.