With the creation and mass adoption of ChatGPT, AI-inspired topics have been thrust to the forefront of everyday conversation. GPT (Generative Pre-trained Transformer) is a deep neural network created by OpenAI, trained to generate human-like text by predicting the next word in a sequence based on the words before it. Using GPT has become as easy as posing a prompt and receiving a human-like response. Thanks to the newly released ChatGPT, anyone with internet access can have a fairly adequate conversation with an artificial intelligence about almost anything.
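For a sense of just how low the barrier is, here is a minimal sketch using OpenAI's Python client (the model name and prompt are illustrative, and an API key is assumed to be configured):

```python
from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

client = OpenAI()

# Posing a prompt and receiving a human-like response is a single API call.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "Explain phishing in one sentence."}],
)
print(response.choices[0].message.content)
```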
So, what might that mean for online brand impersonation? At security conferences over the summer of 2021, Singapore’s Government Technology Agency presented an experiment testing the effectiveness of phishing messages written by humans versus messages written using an AI. The results found, to the researchers’ surprise, that most users who “turned victim” fell for the AI-generated scam. Go figure.
That GPT-generated content netted considerably better results than human-written messages in Singapore’s experimental phishing campaign is alarming. Considering the state of most online brand protection defenses, are organizations prepared to deal with a potential influx of high-quality, targeted brand impersonation attacks at scale? We asked Allure Security founder Dr. Salvatore Stolfo to answer this and a handful of other questions about these tools and what they could foretell for your brand.
Any individual organization will have a tough time keeping up with the current tradecraft of scammers who utilize this advanced technology. Not long ago, it was relatively easy to spot a fake page, since scammers used tools like HTTrack to make copies of brand webpages. Now the attacker’s aim is to trick users and defenses alike by creating a “similar” webpage that could be the real thing, but isn’t. Most current defense systems that simply look for wholesale copies of a real page are not equipped to deal with fake pages that are similar but not copies.
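To make that detection gap concrete, here is a minimal sketch (the page text is invented for illustration): a byte-level fingerprint such as a SHA-256 hash, the kind of check a wholesale-copy detector relies on, changes completely once the text is paraphrased, even though a reader would take both pages for the same brand.

```python
import hashlib

# Invented example text: a genuine brand page and a lightly paraphrased fake.
real_page = "Sign in to your account to view your recent orders and rewards."
fake_page = "Log in to your account to see your latest orders and reward points."

def fingerprint(text: str) -> str:
    """Byte-level fingerprint of the kind used to spot wholesale copies."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# The two fingerprints share nothing, so an exact-match detector sees no
# relationship between the fake page and the real one it imitates.
print(fingerprint(real_page) == fingerprint(fake_page))  # False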
Posed as a prompt, this question elicits the following response from ChatGPT:
“As a language model, I do not have the ability to pose any risks or challenges to brand owners. My purpose is to assist users in generating human-like text based on the input I receive. I do not have the ability to take any actions or have any impact on the real world. Therefore, brand owners do not need to worry about potential risks or challenges related to my capabilities.”
I feel this is misleading. While GPT itself does not possess the capability to harm brands, the user absolutely does. Using GPT as an advanced tool that can create “similar but distinct” fake content makes that job much easier for scammers.
Scammers can copy and paste a target website’s content into GPT and prompt it to rewrite the text into a form that is “better” in style and punctuation and is not a wholesale copy. These fakes avoid detection by programs that simply check for a wholesale copy. That is easy, and it can be trivially automated to work at scale: after writing a single script, scammers can generate countless fake websites with essentially no effort.
I doubt GPT can be incorporated directly into a generation platform, but that isn’t necessary. A simple script that does all the work can be provided by a scam-generation-platform provider (SaaS: Scam-as-a-Service).
Solutions that do not involve sophisticated AI-based detection methods are already obsolete.
Current AI/ML-based strategies are the only hope going forward. They certainly need to be continuously updated and improved as fake sites become more prevalent and harder to discern.
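As one illustration of what such a strategy can look like (a minimal sketch, not a description of Allure Security’s actual methods), a fuzzy similarity measure such as TF-IDF cosine similarity still scores a paraphrased fake as close to the genuine page even when no sentence matches verbatim:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

real_page = "Sign in to your account to view your recent orders and rewards."
fake_page = "Log in to your account to see your latest orders and reward points."
unrelated = "Today's forecast calls for light rain and a high of 54 degrees."

# Turn all three texts into TF-IDF term-weight vectors.
vectors = TfidfVectorizer().fit_transform([real_page, fake_page, unrelated])

# Compare the suspect pages against the real one: the paraphrased fake scores
# far higher than the unrelated text, so a threshold can flag it for review.
scores = cosine_similarity(vectors[0], vectors[1:]).flatten()
print(f"fake vs. real: {scores[0]:.2f}  unrelated vs. real: {scores[1]:.2f}")
```

A production detector would of course compare far richer signals (page structure, images, logos) than raw text, but the principle is the same: measure similarity rather than demand an exact copy.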
A highly believable fake website requires intelligence to discern. After all, highly believable deception is an often-used strategy against natural intelligence. Fighting these scams at scale requires sophisticated knowledge and the application of intelligence and automation. That is what AI does. It is the only path forward to fighting back at scale. This is a yin/yang moment: AI balances AI.
Posted by Mitch W