
From our sponsor

Recent studies by Canalys and Channel Futures project managed service provider (MSP) revenue to grow 11% or more in 2024. The Channel Futures study reveals that 62% of MSPs increased their artificial intelligence (AI) deployments and consultations in the fourth quarter of 2023.

Technology agents/advisors (TAs) are also increasing their use of generative AI (GenAI) in their business processes. TAs are professionals who offer consulting or other technical services without offering managed services. The top use cases for GenAI in this group were sales and marketing (48%), social media posts (38%), education and research (32%), and email (32%). You can see the details here.

Barracuda has researched the use of AI in cybersecurity, and we’ve written about it extensively in eBooks and on our blog. Our colleague Neal Bradbury has also been raising awareness around the promise of AI, most recently in this Channel Futures article.

Key takeaways

GenAI is a time-saving tool for security teams because it automates routine tasks so team members no longer need to perform them manually. Using GenAI this way can make processes more efficient and improve employee satisfaction and retention. Offloading mundane tasks to GenAI allows employees to spend time on more strategic initiatives that may be more fulfilling to a security professional.

AI is a powerful ally in email security. Machine learning (ML) and AI models can learn the messaging patterns of the business and monitor each message for anomalies. Several types of AI work together to classify emails, understand the language used in messages, and act on deviations from standard behavior patterns. Barracuda Email Protection uses various AI technologies to defend against everything from spam to advanced threats and zero-day attacks. See this post for details on how modern email security uses several subtypes of AI.
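To make the idea of learning messaging patterns and flagging deviations concrete, here is a minimal sketch in Python. This is an illustrative toy, not Barracuda's implementation: it assumes a simple per-sender profile of typical sending hours and known recipients, and scores a new message by how far it departs from that history.

```python
from collections import defaultdict

class SenderProfile:
    """Toy behavioral profile: tracks the hours and recipients a sender normally uses."""

    def __init__(self):
        self.hours = defaultdict(int)  # hour of day -> message count
        self.recipients = set()        # addresses previously written to
        self.total = 0

    def learn(self, hour, recipient):
        """Record one observed legitimate message."""
        self.hours[hour] += 1
        self.recipients.add(recipient)
        self.total += 1

    def anomaly_score(self, hour, recipient):
        """Return 0.0 for behavior matching history, up to 1.0 for completely novel behavior."""
        if self.total == 0:
            return 1.0  # no history yet: treat as maximally suspicious
        hour_score = 1.0 - self.hours[hour] / self.total
        recipient_score = 0.0 if recipient in self.recipients else 1.0
        return (hour_score + recipient_score) / 2

profiles = defaultdict(SenderProfile)

# Train on a sender's normal traffic: business hours, a known recipient.
for _ in range(20):
    profiles["ceo@example.com"].learn(hour=10, recipient="cfo@example.com")

normal = profiles["ceo@example.com"].anomaly_score(10, "cfo@example.com")
odd = profiles["ceo@example.com"].anomaly_score(3, "attacker@evil.example")
```

Production systems use far richer features (language, tone, link reputation, device and location signals) and real ML models, but the principle is the same: a baseline of normal behavior is learned per account, and deviations trigger further scrutiny.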

AI is the main character in the evolving threat landscape. Neal cites a 2023 Internet Crime Report (p.7) that shows phishing to be the overwhelmingly dominant attack type, contributing to total losses of over $12.5 billion. Threat actors use phishing attacks to trick people into installing malware or revealing login credentials and other sensitive information. Phishing emerged as an attack type in 1995, though early attacks were largely confined to America Online (AOL) users. Modern phishing attacks can be conducted by low-skilled threat actors who purchase access to any of the Phishing-as-a-Service (PhaaS) platforms in the cybercrime ecosystem. PhaaS operators provide a fully developed infrastructure and software kit, and the user launches the attacks. With the help of GenAI, threat actors can develop attacks that appear professionally written and localized to the target region. Other AI technologies help threat actors accelerate and scale their phishing attacks.

In response to AI-enhanced threats, IT teams must deploy AI-enhanced cybersecurity. Email protection, application security, threat intelligence, and many more security-related functions are stronger, better, and faster with AI support. Threat intelligence and signal sharing between cybersecurity vendors elevate the ability of the security industry to stop advanced threats.

There is an urgent need for investment in AI security. Studies show a potentially large gap between AI-enhanced threats and AI-enhanced security, which is a concern. One study revealed that “75% of security professionals witnessed an increase in attacks over the past 12 months, with 85% attributing this rise to bad actors using generative AI.” Forrester Research reports that only 39% of professionals believe their security infrastructure can defend against AI-powered threats. Numbers vary between studies, but most show that defensive AI is struggling to catch up to malicious AI.

Regulatory environments are a challenge to AI adoption. Data protection and privacy laws are a concern to the majority of decision-makers surveyed in this research. India’s Digital Personal Data Protection Act (DPDPA) and the European Union’s General Data Protection Regulation (GDPR) are examples of laws requiring careful consideration. For example, both GDPR and DPDPA require the following:

  • Data privacy and consent: Companies must obtain explicit consent and be transparent about data use.
  • Data minimization and purpose limitation: Only the necessary data can be collected and used for specific purposes.
  • Accountability: Companies must demonstrate compliance and explain how AI models make decisions.

There are many more requirements depending on the regulatory environment, and all of these require thorough planning that inevitably slows the adoption of AI technologies.
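As a concrete illustration of the data minimization principle above, here is a small hypothetical Python sketch (not drawn from any specific product) that redacts obvious personal data from a message before it would be passed to an external AI service. Real compliance programs involve much more than redaction, and the patterns below are deliberately simplistic.

```python
import re

# Simple patterns for two common kinds of personal data.
# Real systems use dedicated PII-detection tooling; these are illustrative only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def minimize(text: str) -> str:
    """Replace emails and phone numbers with placeholders before external processing."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sanitized = minimize("Contact jane.doe@corp.example or 555-867-5309 today.")
```

The design choice here is to strip identifying details at the boundary, so only the minimum data needed for the AI task ever leaves the organization's control.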

Neal’s article can be found here: How CISOs Can Leverage Generative AI to Improve Email, Application Security. If you’d like to dig into the topic, you can view this free webinar on demand: CISOs, AI, and cybersecurity: Insights from Barracuda and a guest speaker from Forrester. The webinar is hosted by Neal and features Jess Burn of Forrester.

Barracuda’s 20+ year history includes several AI innovations, all leading to our comprehensive AI-enhanced security solutions that defend every threat vector. You can schedule a demo of our AI cybersecurity solutions here.

If you’d like to read more about AI, check out these resources:

This article was originally published at Barracuda Blog.

Photo: Sichon / Shutterstock



Posted by Christine Barry

Christine Barry is Senior Chief Blogger and Social Media Manager at Barracuda. In this role, she helps bring Barracuda stories to life and facilitate communication between the public and Barracuda internal teams. Prior to joining Barracuda, Christine was a field engineer and project manager for K12 and SMB clients for over 15 years. She holds several technology credentials, a Bachelor of Arts, and a Master of Business Administration. She is a graduate of the University of Michigan.
