Is AI the New Frontline in Cybersecurity or Just Hype?

November 12, 2024

The AI revolution is here. Cybercriminals are all-in, leveraging advances in AI technology to mount more sophisticated cyberattacks at a faster pace than ever before. Advocates call AI a game changer for detecting threats faster and more accurately, while critics question whether its potential is overhyped, given its current limitations and evolving cybercriminal tactics. The question remains: Are defenders ready to step into an AI-driven future?


See the cybersecurity challenges that IT professionals faced in 2024, including the rise of AI and what’s next. DOWNLOAD IT>>



Cybercriminals haven’t hesitated to harness AI to enhance their schemes. The most popular uses of AI in cybercrime focus on exploiting automation, enhancing attack efficiency and evading detection. Understanding how threat actors are utilizing AI can help defenders get a sense of the value of AI-enabled cybersecurity solutions. Here are six ways cybercriminals are leveraging AI to cause trouble for businesses:

Automated phishing and spear phishing

AI-powered tools help craft convincing phishing emails by analyzing social media, public records and online presence to personalize messages. These attacks are highly targeted, making it easier to deceive victims into clicking malicious links or providing sensitive information. AI is turning out to be an especially useful tool for threat actors conducting business email compromise (BEC) schemes. AI-generated emails accounted for 40% of all BEC attacks in the second quarter of 2024, up 20% over the same period in 2023.

Deepfake and synthetic media attacks

Deepfakes are an especially insidious problem. AI-enhanced tools make it easy for bad actors to create realistic images, videos or voice clips that impersonate individuals. Research published by UK bank Starling Bank shows that 28% of UK adults believe they have been the target of an AI voice cloning scam within the past year. Deepfakes are often used in BEC scams, where attackers impersonate a CEO or other executive to trick employees into transferring funds.



Take a deep dive into why an AI-powered anti-phishing solution is a smart financial choice. GET EBOOK>>


Malware development and polymorphic malware

AI enhances malware with self-modifying capabilities. Polymorphic malware can alter its code slightly every time it runs, making it harder to detect by traditional antivirus solutions. AI helps malware learn from prior detection patterns, further evading defenses. AI is also empowering bad actors to create new malware at an unprecedented rate, with 75% of security professionals reporting a surge in attacks that most attribute to the rise of generative AI.
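To see why signature-based antivirus struggles against polymorphic code, consider that changing even a single byte of a payload produces a completely different hash, so a signature keyed on the old hash never matches the mutated copy. A minimal Python sketch (the payload bytes here are purely illustrative stand-ins, not real malware):

```python
import hashlib

# Two payloads that differ by a single byte -- a trivial stand-in for
# the small mutations polymorphic malware applies on each run.
payload_v1 = b"\x90\x90\x90EXAMPLE-PAYLOAD"
payload_v2 = b"\x90\x90\x91EXAMPLE-PAYLOAD"

sig_v1 = hashlib.sha256(payload_v1).hexdigest()
sig_v2 = hashlib.sha256(payload_v2).hexdigest()

# A signature database built on sig_v1 will not flag the mutated copy.
print(sig_v1 == sig_v2)  # False
```

This is why modern defenses lean on behavioral and AI-based analysis rather than exact-match signatures alone.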

Credential stuffing and brute force attacks

AI models can quickly sift through massive datasets of stolen credentials, automating the process of testing them across various services. This speeds up attacks and improves the likelihood of accessing accounts protected by weak or reused passwords. AI is also a dangerous tool for password theft: in one striking example, researchers trained an AI model to identify laptop keystrokes from audio captured by a nearby smartphone, enabling it to steal passwords with 95% accuracy.
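One common defense against this kind of automated credential testing is a sliding-window lockout on failed logins. The Python sketch below is a simplified illustration only; the threshold and window values are assumptions, not recommendations:

```python
from collections import defaultdict

# Minimal sketch of a failed-login throttle, a common defense against
# automated credential stuffing. Thresholds here are illustrative only.
MAX_FAILURES = 5        # failures allowed before lockout
WINDOW_SECONDS = 300    # sliding window for counting failures

_failures = defaultdict(list)  # account -> timestamps of recent failures

def record_failure(account: str, now: float) -> bool:
    """Record a failed login at time `now`; return True if the account
    should be temporarily locked."""
    recent = [t for t in _failures[account] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _failures[account] = recent
    return len(recent) > MAX_FAILURES

# Six rapid failures trip the lockout on the final attempt:
print([record_failure("alice", now=float(i)) for i in range(6)])
# [False, False, False, False, False, True]
```

A human mistyping a password a few times stays under the threshold, while a bot hammering thousands of stolen credential pairs trips it almost immediately.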

Automated botnets, API and DDoS attacks

Bad actors were quick to leverage AI to power up botnets. Last year, 30% of all Application Programming Interface (API) attacks were driven by automated threats, with 17% specifically tied to bots. This use of AI by bad actors is growing rapidly and is expected to be a top danger in 2024. AI-driven botnets have also been a major boon for bad actors engaging in Distributed Denial of Service (DDoS) attacks, which surged by 106% from H2 2023 to H1 2024.

Business logic abuse

Business logic abuse refers to cyberattacks or fraud that exploit the intended functions and workflows of an application or system. Instead of targeting technical vulnerabilities, this type of attack is designed to take advantage of weaknesses in the way a system is designed to handle specific business processes. Business logic abuse is the most common AI-driven cyberattack used against retailers, accounting for about 30% of AI-powered attacks in the past year.
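To make the idea concrete, here is a simplified, hypothetical example in Python: a checkout routine with no technical vulnerability at all that can still be abused, because it never limits how many times a discount code is applied. The function names and the "SAVE10" code are invented for illustration:

```python
# Hypothetical checkout logic with a business logic flaw: nothing is
# "hacked," but the intended one-per-order discount can be stacked freely.
def apply_discounts_flawed(price: float, codes: list[str]) -> float:
    for code in codes:
        if code == "SAVE10":          # no check that the code is used once
            price *= 0.90
    return price

def apply_discounts_fixed(price: float, codes: list[str]) -> float:
    if "SAVE10" in set(codes):        # each distinct code applies at most once
        price *= 0.90
    return price

# Submitting the same code five times abuses the intended workflow:
print(round(apply_discounts_flawed(100.0, ["SAVE10"] * 5), 2))  # 59.05
print(round(apply_discounts_fixed(100.0, ["SAVE10"] * 5), 2))   # 90.0
```

The flaw is invisible to a vulnerability scanner because the code works exactly as written; only the business rule is broken, which is why AI-driven automation is so effective at finding and exploiting gaps like this at scale.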



Learn how to minimize phishing risk with AI & automation in The Anti-phishing Email Security Buyer’s Guide GET IT>>



As with any hot topic, IT professionals have widely varying opinions on the expected impact of AI on cybersecurity. Our survey respondents showed a generally positive outlook toward the role of AI in enhancing business security, with more than half of the respondents saying they believe that AI will help them be more secure. However, doubt remains. Almost one-third of the IT professionals surveyed said they are uncertain about the impact AI may have on their company’s security. This split in perspectives highlights the need for more education and clarity around the benefits and limitations of AI in cybersecurity.

Do you believe AI will help you be more secure?

Yes: 53%
No: 13%
I don’t know: 34%

Source: Kaseya


Learn how to identify and mitigate malicious and accidental insider threats before there’s trouble! GET EBOOK>>



To explore how AI is impacting IT professionals and transforming cybersecurity, we asked our survey respondents to voice their sentiments about the impact of AI. Here’s what they had to say:

What impact do you believe AI will have on your organization?

  • AI helps improve productivity.
  • AI can provide quicker, more accurate data analysis.
  • Our anti-malware solution uses predictive AI to evaluate threats.
  • It will reduce human error.
  • AI can offer better threat detection by analyzing successful attacks on our peers and determining if we have similar vulnerabilities.

How do you believe AI will benefit bad actors?

  • AI helps make more believable phishing emails.
  • It can be used to automate cyberattacks, making them more difficult to detect and respond to.
  • Hackers can use AI to find vulnerabilities in systems or personalize phishing attacks.
  • AI will provide more of the target’s data at a quicker pace. This will help them orchestrate a more robust and convincing attack.
  • It will open more doors to social engineering attacks and make them seem even more realistic.

Source: Kaseya



See why choosing a smarter SOC is a smart business decision. DOWNLOAD AN EBOOK>>



As AI-driven cybercrime advances, it’s becoming critical for organizations to adopt equally sophisticated countermeasures, including AI-based cybersecurity solutions.

BullPhish ID – This effective, automated security awareness training and phishing simulation solution provides critical training that improves compliance, prevents employee mistakes and reduces a company’s risk of being hit by a cyberattack.     

Dark Web ID – Our award-winning dark web monitoring solution is the channel leader for good reason. It provides the greatest amount of protection around with 24/7/365 human- and machine-powered monitoring of business and personal credentials, including domains, IP addresses and email addresses.    

Graphus – This automated anti-phishing solution uses AI and a patented algorithm to catch and quarantine dangerous messages. It learns from every organization’s unique communication patterns to continuously tailor protection without human intervention. Best of all, it deploys in minutes to defend businesses from phishing and email-based cyberattacks, including zero day, AI-enhanced and novel threats.  


Book a demo of BullPhish ID, Dark Web ID and Graphus. BOOK IT>>


Read our case studies and see how MSPs and businesses have benefited from using our solutions. READ NOW>