Thursday, January 16, 2025

“Generative AI Adds Complexity to Cybersecurity,” Says Trend Micro & Forrester


The AI landscape is evolving faster than ever, and this is especially true in cybersecurity. Many South African enterprises are now focusing their investments on application security, with a strong emphasis on AI-powered solutions to strengthen their defenses. This trend reflects a growing awareness of AI's potential benefits, even as the technology itself continues to advance.

David Roth, Chief Revenue Officer at Trend Micro, and Jeff Pollard, Vice President and Principal Analyst at Forrester, hosted a webinar to cut through the hype surrounding AI and machine learning in security strategies, emphasizing that generative AI (Gen AI) introduces a new layer of complexity.


Here are five key points highlighted during the webinar:

  1. Be aware of the impact on skills

The appeal of a ‘more sophisticated new toy’ isn’t the only thing driving interest in generative AI for cybersecurity. Security teams are in severe need of assistance—they are overburdened, understaffed, and coping with continuously shifting threats. So it’s no surprise that when Gen AI arrived on the scene, people began fantasizing about fully autonomous security operations centers (SOCs) staffed by Terminator-like malware hunters.

However, today’s Gen AI systems are not yet ready to operate autonomously. Instead of addressing the skills gap, Gen AI may introduce additional training issues in the medium run. Furthermore, integrating these AI tools into existing workflows takes time, even for experienced workers.

Despite these hurdles, there are some really promising uses for Gen AI in security right now. By enhancing what teams can already do, AI can help them achieve better results with less repetitive work. This is especially true in areas like application development and detection and response.

  2. Understand how to achieve quick wins

Gen AI is transforming security work by automating documentation tasks such as action summaries, event write-ups, and reports. This relieves a time-consuming, tedious process and lets security professionals focus on the incidents themselves.

However, strong communication skills are still required for these positions, and AI-generated reports should not be used to replace professional development.

Gen AI can also suggest next steps and retrieve information from knowledge bases more quickly than humans. However, it is critical that AI outputs meet the needs of the enterprise. If a procedure requires seven steps and AI recommends only four, a person must ensure that all steps are completed in order to satisfy objectives and remain compliant. Skipping steps can lead to catastrophic consequences.
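The seven-versus-four-steps scenario above can be sketched as a simple coverage check that flags an AI-suggested plan for human review when it skips required playbook steps. The playbook and step names below are hypothetical examples, not a real incident-response standard:

```python
# Illustrative sketch: verify that an AI-suggested response plan covers
# every step of a required incident-response playbook before acting on it.
# The playbook and step names are hypothetical.

REQUIRED_PLAYBOOK = [
    "isolate_host",
    "capture_memory",
    "collect_logs",
    "identify_indicators",
    "eradicate_malware",
    "restore_from_backup",
    "file_compliance_report",
]

def missing_steps(ai_suggested: list[str]) -> list[str]:
    """Return required steps the AI plan skipped, in playbook order."""
    suggested = set(ai_suggested)
    return [step for step in REQUIRED_PLAYBOOK if step not in suggested]

# An AI summary that proposes only four of the seven required steps:
ai_plan = ["isolate_host", "collect_logs", "eradicate_malware", "restore_from_backup"]

gaps = missing_steps(ai_plan)
if gaps:
    print("Plan incomplete; human review required. Missing:", gaps)
```

A check like this keeps the human in the loop: the AI drafts the plan, but a deterministic gate decides whether it meets the enterprise's compliance bar.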

  3. Look out for data gaps that impact AI performance

Security teams can seize the big-data opportunity by using Gen AI to become more proactive, spot changes in attack surfaces, and run attack-path simulations. Even though it may not predict threats with precision, it can help teams stay ahead of potential problems.

However, an organization’s awareness of its systems and configurations determines how well this works. Knowledge gaps result in AI performance gaps, and regrettably, many organizations continue to face challenges due to dispersed data and documentation.

Standardized data management and proper data hygiene must be the main priorities of security teams.

  4. Introduce safety measures for shadow AI

Businesses globally are rightly worried about AI leaking sensitive information, whether through unauthorized tools or even approved software that’s been enhanced with AI. In the past, hackers needed to know how to break into systems to get this data, but now, a simple prompt could make it accessible.

Companies must safeguard themselves from unauthorized AI use and ensure proper use of approved tools, particularly when developing applications using large language models (LLMs), by securing data, apps, LLMs, and prompts.
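One minimal safety measure of the kind described above can be sketched as a prompt guard that redacts obviously sensitive values before text reaches an external LLM. The patterns and placeholder labels below are illustrative assumptions; a real deployment would need a much broader policy:

```python
# Illustrative sketch: redact sensitive values from a prompt before it is
# sent to an external LLM. Patterns here are simplified assumptions.

import re

# Hypothetical patterns for data that should never leave the enterprise.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive values with labeled placeholders."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

sanitized = redact_prompt("Contact alice@example.com, token sk-abcdef1234567890XYZ")
```

The same guard applies whether the prompt comes from an approved tool or an unauthorized one, which is why it belongs at the boundary where data leaves the organization rather than inside any single application.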

These concerns boil down to a few main issues: bring-your-own AI, enterprise apps, and product security. All require their own safety measures and affect the Chief Information Security Officer’s (CISO) responsibilities, even if the CISO isn’t directly managing these projects.

  5. Don’t get caught unprepared

Consider the early days of cloud computing and the hysteria surrounding shadow IT apps—there’s a lot to learn from those times. When security professionals labeled unauthorized programs as “shadow IT,” business leaders referred to it as “product-led growth.” Banning them just pushed their use underground, worsening the situation.

Now is the time to develop security-focused AI plans, become acquainted with the technology, and prepare for its big moment. Remember how the cloud caught security professionals off guard, despite ample warning? Given the complexity and power of AI, we simply cannot afford to be unprepared this time.

Gen AI is gaining momentum in cybersecurity, but it won’t immediately solve the skills gap. By learning from past experiences with shadow IT and cloud adoption, teams can better prepare for AI’s transformative future. Preparation and proactive management are crucial to harness AI’s true power and keep enterprises competitive.
