Magazine Button
Can ChatGPT weaponize phishing?

Can ChatGPT weaponize phishing?

BlogsEditor's ChoiceEnterprise SecurityEuropeNorth AmericaTop Stories

With 450,000 malwares detected every day and staggering 3.4 billion daily phishing emails entering inboxes, the addition of ChatGPT is likely to take attack and defence to a new level explains David Hoelzer at SANS Institute.

The launch of ChatGPT took the world by storm creating a host of new opportunities and challenges, virtually overnight. With capabilities to generate infinite realistic responses and to perform a host of useful and creative applications it’s no wonder the tool captured the attention of millions.

From simple email drafting, to helping students pass exams, to writing code, or even just generating song lyrics and jokes, new AI tools have a lot to offer a lot of people from any walk of life with limitless applications for everything from work to leisure time.

David Hoelzer, SANS Fellow and AI Expert at SANS Institute

The technology sophistication has wowed the world but as the dust settles and the hype calms, inevitable questions on what these tools mean for the future of cybersecurity are fast clouding the knee-jerk enthusiasm. In our always-on digital world opportunistic hackers are known to take advantage of any new tool for their own benefit and the reality of technology is that nothing is ever completely fool proof.

Technology is built by humans and with human beings comes human error, so technology advancement presents yet more new toys for hackers to play with. So, what are the actual drawbacks and how can we stay on the front foot to ensure AI brings benefits and not just another cybersecurity pitfall to swerve?

Chatbot misuse

ChatGPT has gripped the world but amid the hype and enormous potential, it is important to come to grips with the reality and understand the real impact of advanced AI solutions. While the machine learning chatbot has game-changing capabilities, it is also plumping up hackers’ toolkits, with criminals able to find ways to use, abuse, and trick the system into performing tasks that play right into the hackers’ hands.

With 450,000 new pieces of malware detected every day and a staggering 3.4 billion daily phishing emails entering our inboxes, attacks of this nature have become so commonplace and sophisticated that they are harder than ever to detect.

Now the excitement of the AI tools has settled people are beginning to question the security element, and just like any change to the ways we work and behave, along with the buzz comes the promise of security threats as cybercriminals will look to exploit and expand their hacker toolkits.

Exploiting tools

The easiest and most commonplace application of AI chatbots for cybercriminals will be generating sophisticated and persuasive phishing emails. The Dubai Police recently warned against phishing scams in the form of emails urging recipients to pay fines and service fees.

In days gone by, if an email had typos, then it set phishing alarm bells ringing. Now it’s the opposite, and we advise all to look for typos as a positive sign that the email’s probably from a human!

You can tell the chatbots to be imperfect by asking it to sprinkle text with a couple of typos, so it depends on what stage cybercriminals are at in teaching it to perform phishing tasks. As well as this, spam is one of the first places cybercriminals will take this since it’s one of the fastest things the model can do.

Research has revealed that AI chatbots are currently easily influenced by text prompts embedded on web pages. So, cybercriminals can use indirect prompt injection – where they secretively embed instructions in a webpage. If a user unknowingly asks a chatbot to ingest a page, this can activate the placed prompt.

Researchers even found that Bing’s chatbot can detect other tabs open on a user’s device, so hackers simply need to embed the instructions on any webpage open in a tab. Cybercriminals can then easily manipulate the user through the AI tool and could attempt to obtain sensitive information such as your name, email address and credit card details.

Privacy concerns

These technological advances come with risks in the form of bias, misinformation, privacy concerns, automated attacks, and even malicious use. Search engines already represent a well-known privacy risk in that any information that is unsecured or publicly available on a site that is scraped by the search engine will potentially be indexed.

To some extent, this has been mitigated over the years as search engine companies have recognised certain patterns that are particularly damaging and actively do not index them, or at least do not allow public searches for that information. An example would be social security numbers.

On the other hand, Chatbots, or more generally, AI tools trained on something like CommonCrawl or The Pile, which are large, curated, scrapes of the Internet at large, represent fewer familiar threats. Especially when we are thinking about large-scale models like LLaMa, ChatGPT, the potential for AI to be able to generate accurate personal data for some number of individuals given the proper prompt is real.

The good news is that, since the responses are being generated based on probabilities rather than recalled from scraped data, it is much less likely that all of the data is accurate.

In other words, the risk in a search engine remains greater for a larger percentage of the population. The risk to a smaller number of people might be higher in an AI but is somewhat alleviated by not knowing beforehand which individuals the AI might be able to generate accurate information about.

Should we be worried?

It’s important to remember that ChatGPT is not learning at the moment, it is just doing predictions of the entire history of your chat. It cannot currently be directed to automate ransomware attacks. It is a research tool created to show the world what is possible, see how people use it, and explore potential commercial uses. It is key to remember that we are not indirectly training AI chatbots every time we use them as some people may assume.

OpenAI wants to see what ChatGPT does, what it is capable of and how different people use it. The creators want to give AI to everyone. However, their concern is that if only a small number of humans have AI capabilities, those people will ultimately become superhumans. Democratising access to AI and its real-world security benefits will minimise the risk of only a select few having these extra capabilities.

This is why it is so important for the threat hunting and security teams in organisations to understand how these tools work and what the realities of the technologies are. Without knowledge, it can be very difficult for teams to keep management informed about what the real risks and opportunities are.

Businesses could even soon be using AI as a force for good by preventing cyberattacks through phishing, as the tools could be trained to recognise the language generally used by staff and therefore detect any deviations from this from outside threat actors.

The possibilities generative AI can bring for the future are exciting and transformative. However, it’s important to not lose sight of the threats that also come alongside it. Like any transition in how we do things online, AI chatbots introduce many new possibilities for the cybercriminals that use them too.

Educating people on the specific threats at play is key to avoiding attacks. If users know exactly how hackers could be targeting them, then they will be better able to ward them off.

Click below to share this article

Browse our latest issue

Intelligent Tech Channels

View Magazine Archive