The cyber threat landscape is deteriorating fast, and learning about new attacks and the best ways to keep your company safe is vital. For instance, AI-powered attack methods, such as voice cloning and deepfakes, maximize hackers’ success rates in phishing and social engineering incidents. Worse still, broad access to generative AI products like ChatGPT further democratizes cybercrime. Understanding AI-powered attacks will help you keep pace with cybercriminals’ innovation and secure your systems and data.
Hackers use AI to improve their odds.
As AI innovation advances, threat actors have realized they can use the technology for different attacks to increase their success rates and maximize profits.
Some of the ways hackers use AI-powered attacks include the following:
1. Voice cloning
Hackers leverage AI in vishing (voice phishing) attacks to dupe unsuspecting users and employees into believing they are speaking with legitimate callers.
Sometimes, threat actors combine these AI-powered calls with other tactics, such as business email compromise attacks. For instance, they can call victims to give them a heads-up about an email (in this case, a phishing email) they are about to receive. This strategy increases the hacker’s success rate because primed recipients are less likely to identify the email as harmful.
2. Deepfake technologies
Besides voice manipulation, hackers have weaponized AI by altering video material to conduct plausible social engineering attacks.
Deepfake misuse has eroded trust in body cameras, surveillance footage, and other video and audio evidence. These AI-powered attacks have also fueled cyberbullying, stock manipulation, and blackmail, and worsened political instability.
3. AI-powered phishing emails
Hackers can use generative AI, such as the ChatGPT tool, to craft convincing phishing emails that bypass conventional spam filters. With publicly available and free generative AI solutions, cybercriminals and malicious insiders will generate convincing emails and code with little technical expertise.
Zias Nasr from Acronis notes that cybercriminals’ use of AI and machine learning to create phishing emails and malware lowers the barrier to entry into cybercrime and increases attack frequency.
Previously, attackers were limited in their ability to target victims in the UAE with phishing emails because few attackers write in Arabic.
“However, with generative AI models, attackers can generate well-written, seemingly trustworthy phishing emails and messages in various languages at the click of a button,” states cybersecurity expert Safwan Akram. Translating phishing text into different languages localizes the attacks and increases trust levels.
The large language models (LLMs) behind tools like ChatGPT are versatile enough to create realistic phishing emails. A recent cyber threat analyst report states that these AI tools can generate hundreds of slightly different messages, making traditional static detection difficult.
4. Scaling attacks with minimal effort
Apart from generating different phishing emails rapidly, AI can even draft replies to victims’ questions and follow-up emails, greatly reducing attack time and effort. Generative AI tools can generate scripts for sending and responding to emails while recognizing which topics work well and which to avoid.
5. AI-powered malware
AI innovations make malware creation easier for cybercriminals. Threat actors can use AI to create sophisticated polymorphic malware that mutates (rewriting its own design and code after each infection) to evade conventional security mechanisms. A process that once took hours now takes a few minutes with generative AI tools.
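To see why polymorphic malware defeats conventional, signature-based detection, consider how a hash signature behaves: any change to a file, however trivial, produces a completely different fingerprint. The sketch below is a simplified illustration (benign strings stand in for file contents; real scanners use richer signatures than a bare hash):

```python
import hashlib

def signature(payload: bytes) -> str:
    """Return a SHA-256 'signature' of the payload, as a naive scanner might."""
    return hashlib.sha256(payload).hexdigest()

# Two functionally identical payloads differing by one cosmetic byte,
# standing in for a polymorphic sample that rewrites itself on infection.
variant_a = b"print('hello')  "
variant_b = b"print('hello')\t "

sig_a = signature(variant_a)
sig_b = signature(variant_b)

print(sig_a == sig_b)  # False: one byte of mutation defeats the signature
```

Because every mutation invalidates the stored signature, defenders increasingly pair signatures with behavioral and anomaly-based detection.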
6. Discovering vulnerabilities
Generative AI models can understand and identify flaws in program code. In this case, threat actors can paste software’s source code into AI-powered solutions to detect vulnerabilities such as SQL injection, buffer overflow, missing authentication, and unrestricted uploads.
Next, the AI chatbot can create corresponding exploits and obfuscate the attack methods.
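The flip side is that the same flaw classes an AI assistant can spot have well-known fixes. As an illustrative (not attacker-oriented) sketch, the SQL injection class mentioned above comes down to concatenating untrusted input into a query, and the standard remedy is a parameterized query. This example uses Python’s built-in sqlite3 with a hypothetical `users` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized: the driver treats the input strictly as data, never as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # the injected condition returns every row
print(find_user_safe(payload))    # the parameterized version returns nothing
```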
Staying ahead of AI-powered attacks
1. AI-powered cyber defense
Just like with emerging cyber threats, security experts should adopt advanced technical and administrative solutions to keep up with criminals’ innovations. For instance, security teams can use AI-powered tools for threat intelligence and cyber risk assessments.
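At its simplest, “AI-powered” defense can mean statistical anomaly detection over security telemetry. The sketch below is a minimal, hypothetical example (the data and threshold are illustrative): it flags hours whose failed-login counts sit far outside the baseline, the kind of building block behind many ML-based threat-detection features.

```python
from statistics import mean, stdev

def anomaly_scores(samples: list[float]) -> list[float]:
    """Z-score of each sample against the whole series."""
    mu, sigma = mean(samples), stdev(samples)
    return [(x - mu) / sigma for x in samples]

# Hypothetical hourly failed-login counts; the final hour spikes sharply.
failed_logins = [3, 4, 2, 5, 3, 4, 3, 48]
scores = anomaly_scores(failed_logins)

# Flag hours more than two standard deviations above the mean.
flagged = [i for i, s in enumerate(scores) if s > 2]
print(flagged)  # only the spike hour is flagged
```

Production tools layer far more sophisticated models on top, but the principle is the same: learn a baseline, then surface deviations for analysts to triage.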
2. Defense-in-depth model
Organizations using cloud environments should invest in multiple security layers and a zero-trust model built on strict access controls.
3. Patch management and WAF
Use up-to-date tools to assess and detect vulnerabilities in the software and other solutions in your IT environment. Common security measures such as patch management and web application firewall (WAF) filtering help detect emerging vulnerabilities and protect your assets from exploitation.
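Conceptually, WAF filtering inspects incoming requests against a set of attack-pattern rules before they reach the application. The sketch below is deliberately simplified (real WAFs such as ModSecurity ship far richer rule sets; these two regex rules are illustrative stand-ins):

```python
import re

# Illustrative WAF-style rules; patterns are simplified for the sketch.
RULES = {
    "sql_injection": re.compile(
        r"['\"]\s*(OR|AND)\s+['\"]?1['\"]?\s*=\s*['\"]?1", re.IGNORECASE
    ),
    "path_traversal": re.compile(r"\.\./"),
}

def inspect(query_string: str) -> list[str]:
    """Return the names of any rules the request parameter triggers."""
    return [name for name, pattern in RULES.items() if pattern.search(query_string)]

print(inspect("id=1' OR '1'='1"))        # flags sql_injection
print(inspect("file=../../etc/passwd"))  # flags path_traversal
print(inspect("id=42"))                  # clean request, no flags
```

A request that triggers a rule would be blocked or logged for review; clean requests pass through to the application.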
4. Cybersecurity awareness training
Additionally, organizations should continuously create awareness about new AI-powered attacks. Users need to know how to detect red flags in emails, such as typos and malicious links. Businesses in the UAE should also equip employees with smart support solutions to detect and respond to attacks.
Technology is advancing at breakneck speed in the UAE, with many industries and processes going online, from retail to banking to oil and energy production. AI adoption is also on the rise. As AI gradually permeates everyday life, cybercriminals have not hesitated to take advantage of the technology. While security teams use AI for defensive purposes and threat intelligence, it is indisputable that the technology has complicated the cybersecurity landscape. Therefore, security teams should consider deploying AI-powered defense mechanisms in addition to standard defense-in-depth controls and awareness training.