WormGPT is ChatGPT, but evil — how much should it worry you?

Written by Ben Berkley | Jul 19, 2023

You know the trope: Something fishy’s going on with the protagonist, and here comes that tired twist — there’s an evil twin.

Gasp. So surprising.

This latest evil twin reveal will surprise nobody, but it’s pretty jarring all the same — ChatGPT has an evil twin that caters to cybercriminals.

WormGPT…

… was created by an unknown hacker and launched last month, with its designer calling it ChatGPT’s “biggest enemy.”

OpenAI built ethical guardrails into ChatGPT to prevent malicious activities; WormGPT has none of that.

This huge jerk of a chatbot proudly “lets you do all sorts of illegal stuff,” per PCMag.

How does it work?

Cybersecurity firm SlashNext put WormGPT to use:

  • Its primary purpose: generating convincing business email compromise (BEC) attacks, which use fake, personalized messages to access sensitive accounts.
  • SlashNext found the tool produced “sophisticated” phishing emails that were “remarkably persuasive” and “strategically cunning” with commendable grammar.
  • It also confirmed WormGPT has “no ethical boundaries or limitations.”
  • Like ChatGPT, the tool is easy to use, meaning cybercriminals with limited experience could find success with it.

… so, that all sucks.

But let’s keep those worries in check?

Unlike its more responsible sibling, WormGPT isn’t free: Access costs ~$617/year, limiting adoption.

Also slowing its growth: bad word-of-mouth. One buyer on the hacker forum where WormGPT is sold said it’s “not worth any dime.”

And while smarter BEC attacks present a challenge, the threat itself is nothing new: already a leading cybercrime that cost victims $2.4B worldwide in 2021, BEC attacks are a longtime focus for cybersecurity experts.

  • Want to prepare yourself? The FBI offers tips on safeguarding accounts.

For now, WormGPT is less an active concern and more a sign of the looming fights ahead as AI increasingly falls into the hands of bad actors.