The leading US media outlet CNBC recently reported an enormous increase in malicious attacks enabled by generative AI tools such as ChatGPT. Read more in the excerpts from the article below, or access the full text of the article here.

Since the fourth quarter of 2022, there’s been a 1,265% increase in malicious phishing emails, and a 967% rise in credential phishing in particular, according to a new report by cybersecurity firm SlashNext.

The report, based on the company’s threat intelligence and a survey of more than 300 North American cybersecurity professionals, notes that cybercriminals are leveraging generative artificial intelligence tools such as ChatGPT to help write sophisticated, targeted business email compromise (BEC) and other phishing messages.

“These findings solidify the concerns over the use of generative AI contributing to an exponential growth of phishing,” said Patrick Harr, CEO of SlashNext. “AI technology enables threat actors to increase the speed and variation of their attacks by modifying code in malware or creating thousands of variations of social engineering attacks to increase the probability of success.”

The report findings highlight just how rapidly AI-based threats are growing, especially in their speed, volume, and sophistication, Harr said.

Billions of dollars in losses

Another reason for such a high increase in phishing attacks is that they are working, Harr said. He cited the FBI’s Internet Crime Report, which said BEC alone accounted for about $2.7 billion in losses in 2022, with another $52 million in losses from other types of phishing.

“With rewards like this, cybercriminals are increasingly doubling down on phishing and BEC attempts,” Harr said.

While there has been some debate about the true influence of generative AI on cybercriminal activity, “we know from our research that threat actors are leveraging tools like ChatGPT to deliver fast-moving cyber threats and to help write sophisticated, targeted [BEC] and other phishing messages,” Harr said. 

For example, in July, SlashNext researchers discovered a BEC attack that used ChatGPT and a cybercrime tool called WormGPT, “which presents itself as a black hat alternative to GPT models, designed specifically for malicious activities such as creating and launching BEC attacks,” Harr said.

After the emergence of WormGPT, reports started circulating about another malicious chatbot called FraudGPT, Harr said. “This bot was marketed as an ‘exclusive’ tool tailored for fraudsters, hackers, spammers, and similar individuals, boasting an extensive list of features,” he said.

Another grave development that SlashNext researchers discovered involves the threat of AI “jailbreaks,” in which hackers cleverly remove the guardrails governing the legitimate use of gen AI chatbots. In this way, attackers can turn tools such as ChatGPT into weapons that trick victims into giving away personal data or login credentials, which can lead to further damaging incursions.

“Cyber criminals are leveraging generative AI tools like ChatGPT and other natural language processing models to generate more convincing phishing messages,” including BEC attacks, said Chris Steffen, research director at analyst and consulting firm Enterprise Management Associates.

“Gone are the days of the ‘Prince of Nigeria’ emails that presented broken, nearly unreadable English to try to convince would-be victims to send their life savings,” Steffen said. “Instead, the emails are extremely convincing and legitimate sounding, often mimicking the styles of those that the bad guys are impersonating, or in the same vein as official correspondence from trusted sources,” such as government agencies and financial services providers.

“They can use AI to analyze past writings and other publicly available information to make their emails extremely convincing,” Steffen said.

For example, a cybercriminal might use AI to generate an email to a specific employee, posing as the individual’s boss or supervisor and referencing a company event or a relevant personal detail, making the email seem authentic and trustworthy.

Cybersecurity leaders can take a number of steps to counteract and respond to the increased attacks, Steffen said. For one, they can provide continuous end-user education and training.

“Cybersecurity professionals need to make [users] constantly aware of this threat; a simple one-time reminder is not going to accomplish this goal,” Steffen said. “They need to be building on these trainings and establish a security awareness culture within their environment, one where the end users view security as a business priority, and feel comfortable reporting suspicious emails and security related activities.”

Another good practice is to implement email filtering tools that use machine learning and AI to detect and block phishing emails. “These solutions need to be constantly updated and tuned to protect against constantly evolving threats and updates to AI technologies,” Steffen said.
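As a rough illustration of the text-analysis step such filtering tools perform, here is a minimal sketch of a phishing classifier built with scikit-learn. The handful of training emails, the labels, and the 0.5 quarantine threshold are all invented for demonstration; a production filter would train on large labeled corpora and combine this kind of text signal with sender reputation, URL analysis, and message headers.

```python
# Minimal sketch of an ML-based phishing text filter, for illustration only.
# Assumes scikit-learn is installed; the tiny training set below is invented
# to show the shape of the pipeline, not real threat data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = phishing, 0 = legitimate.
emails = [
    "Urgent: your mailbox password expires today, verify your credentials here",
    "Wire transfer needed before 5pm, reply with the account details",
    "Quarterly report attached, let me know if you have questions",
    "Team lunch moved to Thursday, same place as last time",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = "Please verify your account credentials using the link below"
score = model.predict_proba([incoming])[0][1]  # probability the message is phishing
print(f"Phishing probability: {score:.2f}")
if score > 0.5:  # threshold would be tuned against false-positive tolerance
    print("Message quarantined for review")
```

The broader point in Steffen’s advice still applies: whatever model sits behind the filter, it has to be retrained and re-tuned continuously as attackers adapt their wording with generative AI.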

Organizations also need to conduct regular testing and security audits of systems that can be exploited. “They need to test to identify vulnerabilities and weaknesses in the organization’s defenses — as well as with employee training — while addressing known issues promptly to reduce the attack surface,” Steffen said.

Finally, companies need to implement or enhance their existing security infrastructure as needed. “No solution is likely to catch all AI-generated email attacks, so cybersecurity professionals need to have layered defenses and compensating controls to overcome initial breaches,” Steffen said. “Adopting a zero trust strategy [can] mitigate many of these control gaps, and offer defense-in-depth for most organizations.”

___

If this information is helpful to you, read our blog for more interesting and useful content, tips, and guidelines on similar topics. Contact the team of COMPUTER 2000 Bulgaria now if you have a specific question. Our specialists will assist you with your query.

Content curated by the team of COMPUTER 2000 on the basis of news in reputable media and marketing materials provided by our partners, companies, and other vendors.

