In an alarming turn of events, cybercriminals are finding ways to exploit generative AI chatbots, bypassing their ethical and safety protocols to put them to malicious use. The trend has renewed concerns about the weaponization of generative AI tools such as ChatGPT, a threat long anticipated that is now slowly becoming a reality.
Online communities have become hubs for individuals seeking to crack ChatGPT’s ethical rules, a practice commonly referred to as “jailbreaking.” In parallel, hackers are actively developing a growing set of new tools designed to leverage or repurpose large language models (LLMs) for nefarious ends.
Since December, hackers have been on a quest to discover inventive prompts that can manipulate ChatGPT and open-source LLMs for malicious purposes. This concerted effort has given rise to a burgeoning LLM hacking community, although it remains in its early stages, with a surplus of clever prompts but a dearth of AI-enabled malware worth mentioning.
The core technique is prompt engineering: hackers craft and continuously tweak queries designed to make chatbots like ChatGPT break their own rules, including attempts to get them to produce malware. Online communities have formed where members collaborate on prompts that push ChatGPT past its restrictions.
However, prompt engineers can only push the boundaries of wordplay so far. The more alarming trend is the emergence of malware developers who are beginning to program LLMs for their own malicious purposes.
One such offering is WormGPT, which emerged in July as a black-hat alternative to GPT models, designed explicitly for malicious activities like Business Email Compromise (BEC), malware creation, and phishing attacks.
WormGPT enables hackers to execute cyberattacks at scale with minimal cost and greater precision. Similar products like FraudGPT, DarkBART, and DarkBERT have also surfaced in the underground cybercriminal community, leveraging open-source models like OpenAI’s OpenGPT.
While these cyberweapons and the prompt engineers behind them may not yet pose a severe threat to businesses, they represent a shift in the landscape of social engineering and cybersecurity. Experts argue that AI threats require AI protections, as traditional training and defense mechanisms may not suffice against these highly targeted attacks.