10 ways generative AI drives stronger security outcomes – CyberTalk

EXECUTIVE SUMMARY:

Eighty-seven percent of cyber security professionals recognize the potential inherent in leveraging AI for security purposes. The growing volume and sophistication of cyber attacks point to the critical need for new and innovative ways to protect businesses from cyber skullduggery.

However, despite widespread enthusiasm for generative AI, its adoption in the security space has remained somewhat slow and constrained. Why? The reality is that running mature, enterprise-ready generative AI is not an easy feat.

Managing generative AI systems requires skilled professionals, comprehensive governance structures and powerful infrastructure, among other things. Nonetheless, if organizational maturity is accounted for and attended to, generative AI can present robust opportunities through which to drive stronger cyber security outcomes.

10 ways generative AI drives stronger cyber security outcomes

1. Customized threat scenarios. When presented with news articles detailing a never-seen-before threat scenario, generative AI can process the information in such a way as to create a customized tabletop exercise.

When also given organization-specific information, the technology can generate tabletop scenarios that closely align with an organization’s interests and general risk profile. Thus, the AI can strengthen organizational abilities to plan for and contend with emerging cyber threats.

2. Persona-based risk assessment. When joining a new organization, cyber security leaders commonly connect with stakeholders in order to understand department-specific cyber risks.

This effort has its benefits, but only to an extent. Cyber security personnel can only reach out to high-level stakeholders and departmental heads for input so many times before seriously detracting from their work.

To the advantage of cyber security professionals, generative AI can, when set up to do so, emulate various personas. If this sounds absurd, just hang in there: by adopting a persona, the AI can simulate different perspectives and evaluate risk scenarios accordingly.

For example, an AI model that emulates a cautious CFO may be able to provide security staff with insights into financial data security risks that would have otherwise remained overlooked. While new and still somewhat eerie, persona emulation can prompt businesses to examine more elusive risk types and to consider corresponding red teaming activities.

3. Dynamic honeypots. Honeypots are decoy systems designed to strategically misdirect hackers who are looking for high-value data. In essence, they send the hackers hunting in the wrong direction (so that security pros can find them and send them packing).

Generative AI can enhance the effectiveness of honeypot traps by dynamically creating new and different fake environments. This can help protect a given organization’s resources, as it helps to continuously confound and redirect hackers.
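To make the idea concrete, here is a minimal sketch of how a honeypot might rotate its decoy surface. In production, a generative model would invent the fake hostnames, services and accounts; this stand-in randomizes templates to show the mechanic, and every name in it (hosts, services, users) is invented for illustration.

```python
import random

# Illustrative sketch only: a real deployment would call a generative model
# to produce decoy content; here we randomize templates to show the idea.

SERVICES = ["mysql", "postgres", "redis", "mongodb"]
HOSTS = ["db-prod", "finance-api", "hr-backup", "payroll-core"]

def make_decoy_banner(rng: random.Random) -> dict:
    """Produce a fresh fake service descriptor for a honeypot instance."""
    service = rng.choice(SERVICES)
    host = rng.choice(HOSTS)
    return {
        "hostname": f"{host}-{rng.randint(1, 99):02d}",
        "service": service,
        "port": rng.choice([3306, 5432, 6379, 27017]),
        "fake_users": [f"svc_{service}_{i}" for i in range(rng.randint(2, 5))],
    }

rng = random.Random(7)
decoys = [make_decoy_banner(rng) for _ in range(3)]
for d in decoys:
    print(d["hostname"], d["service"], d["port"])
```

Because each decoy is generated fresh, an attacker who fingerprints one fake environment gains nothing reusable against the next.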

4. Policy development and optimization. Generative AI has the ability to analyze historical security incidents, regulations and organizational goals. As a result, it can recommend (or even autonomously develop) cyber security policies. Said policies can be tailored to align with business objectives, compliance requirements and a cyber security strategy.

(However, despite the utility of generative AI in this area, regular policy validation and human oversight are still critical.)

5. Malware detection. When it comes to malware detection, generative AI algorithms excel. They can closely monitor patterns, understand behaviors and zero in on anomalies.

Generative AI can detect new malware strains, including those that deploy unique self-evolving techniques and polymorphic code.
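One simple signal that anomaly-focused detectors (AI-driven ones included) build on is byte entropy: packed or polymorphic payloads tend to look close to random, while benign text does not. This toy sketch computes that signal; it is a classical heuristic shown for illustration, not the generative model itself.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte. Packed or encrypted payloads
    approach the 8.0 ceiling; plain text sits far lower."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plain = b"GET /index.html HTTP/1.1 Host: example.com" * 10
packed = bytes(range(256)) * 10  # stand-in for a high-entropy, packed section

print(byte_entropy(plain))   # low: ordinary text
print(byte_entropy(packed))  # near 8.0: suspiciously random
```

A generative model extends this idea by learning many such behavioral features at once, rather than relying on one hand-picked statistic.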

6. Secure code generation. Generative AI can assist with writing secure code. Generative AI tools can review existing codebases, find vulnerabilities and recommend patches or improvements.

Refusing to use generative AI for secure code development would be like “asking an office worker to use a typewriter instead of a computer,” says Albert Ziegler, principal researcher and member of the GitHub Next research and development team.

In terms of examples of what generative AI can do here, it can automatically refactor code to eliminate common security flaws and issues, like SQL injections or buffer overflows.
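The SQL injection case is easy to show. Below is the kind of before-and-after refactor an AI code reviewer might suggest, sketched with Python's built-in sqlite3 module: the unsafe version concatenates user input into the query, the safe version uses a parameterized placeholder.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'staff')")

def find_user_unsafe(name: str):
    # Vulnerable pattern a reviewer would flag: string concatenation lets
    # input like "' OR '1'='1" rewrite the query itself.
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name: str):
    # Suggested refactor: a parameterized query treats input strictly as data.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row
print(find_user_safe(payload))    # returns no rows
```

The refactor changes one line, which is exactly the kind of mechanical, pattern-driven fix that generative tools can apply across a large codebase.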

7. Privacy-preserving data synthesis. According to research published on arXiv, the open-access repository operated by Cornell University, generative AI’s ability to create task-specific, synthetic training data has positive implications for privacy and cyber security.

For instance, generative AI can anonymize medical data, enabling researchers to study the material without the risk of accidentally exposing real data through insecure tools (or in some other way, compromising patient privacy).
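The underlying idea can be sketched in a few lines: fit a model to the sensitive data, then release samples from the model instead of the data itself. This toy version fits only a mean and standard deviation (a real system would use a full generative model), and the ages shown are made up for illustration.

```python
import random
import statistics

# Toy sketch: instead of sharing real patient ages, fit a simple model
# and release samples drawn from it.
real_ages = [34, 41, 29, 50, 47, 38, 62, 55, 44, 36]
mu = statistics.mean(real_ages)
sigma = statistics.stdev(real_ages)

rng = random.Random(0)
synthetic_ages = [round(rng.gauss(mu, sigma)) for _ in range(10)]

# The synthetic cohort mirrors the distribution without copying any record.
print("real mean/stdev:", mu, round(sigma, 1))
print("synthetic sample:", synthetic_ages)
```

Researchers then work only with the synthetic sample, so a leak of that dataset exposes no actual patient record.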

8. Vulnerability prediction and prioritization. Generative AI and machine learning tools can assist with vulnerability management by analyzing existing databases, software code patterns, network configurations and threat intelligence. Organizations can then predict potential vulnerabilities in software (or network configurations) ahead of when they would otherwise be discovered.

9. Fraud detection. One novel application of generative AI is in fraud detection, as the technology can sift through massive datasets (nearly instantly). Thus, generative AI can flag and block suspicious online transactions as they pop up, preventing possible economic losses.

PayPal is known to have already applied generative AI and ML to enhance its fraud detection capabilities. Over a three-year period, this application of generative AI reduced the company’s loss rate by half.
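At its simplest, anomaly-based flagging scores each transaction against the account's history. This sketch uses a plain z-score on the amount column; production systems (PayPal's included) learn over many features with far richer models, so treat this strictly as a conceptual stand-in.

```python
import statistics

def flag_suspicious(amounts, threshold=2.0):
    """Return indices of transactions whose amount deviates from the
    account mean by more than `threshold` standard deviations."""
    mu = statistics.mean(amounts)
    sigma = statistics.pstdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma and abs(a - mu) / sigma > threshold]

# Seven routine purchases, then one wildly out-of-pattern charge.
history = [12.5, 9.9, 14.2, 11.0, 13.7, 10.4, 12.1, 950.0]
print(flag_suspicious(history))  # only the $950 outlier is flagged
```

The guard on `sigma` avoids dividing by zero when every transaction is identical; in that case nothing is flagged.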

10. Social engineering countermeasures. The success of social engineering tactics, like phishing emails, depends on the manipulation of human emotions and the exploitation of trust. To combat phishing, generative AI can be used to develop realistic phishing simulations for the purpose of employee training.
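A training pipeline along these lines might look like the following sketch. A template randomizer stands in for the generative model that would write each simulated email, and every name, service and domain here (including `training.example.internal`) is invented for illustration.

```python
import random
import string

# Stand-in for the generative model that would write each simulated
# phishing email in an employee-training campaign.
TEMPLATES = [
    "Hi {name}, your {service} password expires today. Verify here: {link}",
    "{name}, an invoice from {service} is overdue. Review it now: {link}",
]

def make_simulation(name: str, rng: random.Random) -> str:
    # Unique token so the training platform can track who clicked.
    token = "".join(rng.choices(string.ascii_lowercase, k=8))
    return rng.choice(TEMPLATES).format(
        name=name,
        service=rng.choice(["Office 365", "Dropbox", "Zoom"]),
        link=f"https://training.example.internal/t/{token}",
    )

rng = random.Random(1)
print(make_simulation("Dana", rng))
```

Because each message varies in wording, pretext and link, employees learn to spot the pattern of manipulation rather than memorizing one canned example.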

Generative AI can also be used to develop deepfakes of known persons — for internal ethical use and training purposes only. Exposing employees to deepfakes in a controlled setting can help them become more adept at spotting deepfakes in the real world.

Explore how else generative AI can drive stronger cyber security outcomes for your organization. Read about how Check Point’s new generative AI-based technology can benefit your team. Click here.

To receive compelling cyber insights, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.