Massive Surge in Harmful Phishing Emails Because of AI Tools like ChatGPT

According to a new analysis by cybersecurity firm SlashNext, there has been a 1,265% increase in malicious phishing emails since the fourth quarter of 2022, with credential phishing seeing the biggest spike at 967%.

According to the report, which is based on threat intelligence from the company and a survey of over 300 cybersecurity experts in North America, cybercriminals are using ChatGPT and other generative artificial intelligence tools to help write advanced, targeted business email compromises (BECs) and other phishing messages.

The study found that an average of 31,000 phishing attacks were sent out daily. Seventy-seven percent of the cybersecurity professionals surveyed said they had been the target of phishing attempts, and nearly half said they had experienced a BEC attack.

SlashNext CEO Patrick Harr said the results reinforce concerns that the use of artificial intelligence is contributing to an exponential growth in phishing. Threat actors can use AI technology to create thousands of social engineering attack variants or tweak malware code, boosting the speed and variety of their attacks and raising the likelihood of success.

According to Harr, the report’s findings demonstrate how quickly AI-based threats are developing, particularly in terms of their volume, speed, and sophistication.

According to Harr, the timing of ChatGPT’s launch at the end of last year and the exponential rise in harmful phishing emails is not coincidental. The entry barrier for inexperienced malefactors has been considerably decreased by generative AI chatbots, which have also given more proficient and seasoned attackers the means to carry out large-scale, targeted spear-phishing attacks.

According to Harr, another explanation for the sharp rise in phishing attempts is their effectiveness. He referenced the FBI’s Internet Crime Report, which stated that losses from BEC alone in 2022 were over $2.7 billion, with further losses from various forms of phishing coming to $52 million.

Cybercriminals are increasingly stepping up their phishing and BEC attempts in response to financial incentives like these, according to Harr.

“Although the exact impact of generative AI on cybercrime has been called into question, our study has shown that threat actors are using ChatGPT and other similar tools to create complex, targeted [BEC] and other phishing messages and deliver swift cyber threats,” said Harr.

For instance, in July, SlashNext researchers found that WormGPT, a cybercrime tool marketed as a black-hat alternative to GPT models and built expressly for malicious purposes such as crafting and launching BEC attacks, had been used in a BEC campaign.

Following the release of WormGPT, rumors of FraudGPT—another dangerous chatbot—began to spread, according to Harr. According to him, this bot was advertised as an “exclusive” tool with a long list of functions designed for spammers, hackers, fraudsters, and other interested parties.

Researchers at SlashNext also found a worrying trend: AI “jailbreaks,” in which attackers craft prompts that strip away the guardrails restricting generative AI chatbots to legitimate uses. Tools like ChatGPT can then be weaponized to deceive victims into disclosing personal information or login credentials, which can lead to more destructive intrusions.

AI Tools Create Phishing Emails

According to Chris Steffen, research director at analyst and consulting firm Enterprise Management Associates, cybercriminals are using generative AI tools like ChatGPT and other natural language processing models to create more convincing phishing communications, including BEC attacks.

The days of sending practically unintelligible emails with the subject line “Prince of Nigeria” trying to persuade potential victims to give their life savings are long gone, according to Steffen. Rather, the emails appear incredibly authentic and convincing, either emulating the writing styles of the people the bad guys are impersonating or sounding like official correspondence from reliable sources, such as financial services firms and government agencies.

According to Steffen, they can make their emails very convincing by using AI to examine previous publications and other material that is readily available to the public.

For instance, a cybercriminal might use AI to create an email addressed to a particular worker, pretending to be the worker’s manager or supervisor and bringing up a work-related occasion or a pertinent personal tidbit to make the email appear reliable and real.

According to Steffen, leaders in cybersecurity can take several actions to combat and react to the growing number of threats. They can offer ongoing end-user training and education, for starters.

According to Steffen, cybersecurity experts must regularly remind [users] of this hazard; a one-time notice alone will not suffice. They must build on these trainings and create a culture of security awareness within their organization so that end users treat security as a business priority and are comfortable reporting suspicious emails and other security-related activity.

Using email filtering software that employs AI and machine learning to identify and stop phishing emails is another wise move. According to Steffen, these solutions must be updated and adjusted frequently to guard against ever-evolving dangers and advancements in AI technology.
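To illustrate the kind of machine learning such filters build on, here is a minimal sketch of a naive Bayes text classifier that scores how "phishing-like" an email reads. The training phrases and class labels are hypothetical examples; commercial filtering products use far larger datasets and much richer features than word counts.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayesFilter:
    """Toy naive Bayes filter with Laplace (add-one) smoothing."""

    def __init__(self):
        self.word_counts = {"phish": Counter(), "ham": Counter()}
        self.label_counts = Counter()

    def train(self, text, label):
        self.label_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def score(self, text):
        # Log-probability ratio: positive means "more phishing-like".
        totals = {l: sum(c.values()) for l, c in self.word_counts.items()}
        vocab = set(self.word_counts["phish"]) | set(self.word_counts["ham"])
        log_ratio = math.log(self.label_counts["phish"] / self.label_counts["ham"])
        for w in tokenize(text):
            p = (self.word_counts["phish"][w] + 1) / (totals["phish"] + len(vocab))
            h = (self.word_counts["ham"][w] + 1) / (totals["ham"] + len(vocab))
            log_ratio += math.log(p / h)
        return log_ratio

f = NaiveBayesFilter()
f.train("urgent verify your account password now", "phish")
f.train("click here to confirm your login credentials", "phish")
f.train("meeting agenda attached for tomorrow", "ham")
f.train("quarterly report draft for your review", "ham")
print(f.score("please verify your password"))  # positive: phishing-like
print(f.score("meeting agenda tomorrow"))      # negative: benign
```

The same retraining cycle mirrors the point above about keeping filters updated: as attackers change their wording, the model must be refreshed with current examples or its scores drift out of date.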

Additionally, organizations must regularly test and audit the security of systems that are vulnerable to attack. To reduce the attack surface, Steffen added, they must test to find vulnerabilities and gaps in the organization’s defenses, alongside staff training, and resolve identified issues quickly.

Lastly, businesses must upgrade their current security infrastructure or implement new controls as necessary. According to Steffen, cybersecurity experts must have layered defenses and compensating controls in place to contain early breaches, since no single solution can stop every AI-generated email attack. Many of these control gaps can be closed by adopting a zero-trust strategy, which provides defense in depth for most enterprises.
