Pål (Paul) has more than 30 years of experience in the IT industry and has worked with both domestic and international clients on a local and global scale. Pål has a very broad competence base that covers everything from general security to datacenter security to cloud security services and development. For the past 10 years, he has worked primarily within the private sector, focusing on large and medium-sized companies across most verticals.
In this expert interview, Check Point security expert Pål Aaserudseter describes where we are with ChatGPT and artificial intelligence. He delves into policy, processes, and more. Don’t miss this.
In the past year, what has caught your attention regarding AI and cyber security?
Hi and thanks for having me on CyberTalk! Looking back at 2023, I think the best word to describe it is wow!
As 2023 progressed, AI experienced huge developments, with breakthroughs in chatbots and large language models, and in sectors like transportation, healthcare, content creation and too many others to mention!
We might say that ChatGPT was the on-ramp into AI for most people in 2023. Obviously, it evolved, got a lot of attention in the media for various reasons, and now its makers are trying to profit from it in different ways. Competition is also on the rise, with companies like Anthropic. We’ll see a lot more happening on the AI front in 2024.
When it comes to cyber security, we have seen massive adoption of AI on both sides of the fence. It is now easier to become a cyber criminal than ever before, as AI-enabled tools are automated, easy to use and easy to rent (as-a-service).
One example is DarkGemini. It’s a powerful GenAI chatbot, being sold on the dark web for a monthly subscription. It can create malware, build a reverse shell, and do other bad things, solely based on a text prompt, and it will surely be further developed to introduce more features that attackers can leverage.
When wielded maliciously, AI becomes a catalyst for chaos. From the creation of deep fakes to intricate social engineering schemes, such as far more convincing phishing attempts and polymorphic malware that continuously mutates its code to evade detection, these threats pose a formidable challenge to current security tools.
Consequently, the balance of power may tip in favor of attackers, as traditional defense mechanisms struggle to adapt and counter these evolving threats.
Cyber attackers leveraging AI have the capacity to automate and quickly identify vulnerabilities for exploitation. Unlike current generic attacks, AI enables attackers to tailor their assaults to specific targets and scenarios, potentially leading to a surge in personalized and precisely targeted attacks. As the scale and precision of such attacks increase, it’s likely that we’ll witness a shift in attacker behaviors and strategies.
Implementing AI-based security that learns, adapts, and improves is critical to future-proofing against unknown attacks.
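To make that idea concrete, here is a minimal sketch of what learning-based detection can look like, assuming hypothetical numeric features extracted from network flow logs; the feature names, values, and contamination setting are illustrative, not a description of any particular product.

```python
# A minimal sketch of learning-based anomaly detection, assuming
# hypothetical features (bytes sent, duration, distinct ports per host).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical baseline of benign traffic.
baseline = rng.normal(loc=[5000, 30, 3], scale=[800, 5, 1], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)  # learn what "normal" traffic looks like

# New observations: one typical flow, one exfiltration-like outlier.
new_flows = np.array([[5100, 29, 3], [90000, 290, 45]])
print(model.predict(new_flows))        # 1 = normal, -1 = anomaly
print(model.score_samples(new_flows))  # lower score = more anomalous

# "Adapting" here would mean periodically refitting on recent benign
# traffic, so the learned baseline tracks changes in the environment.
```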
What new challenges and opportunities are you seeing? What has your experience working with clients been like?
New challenges in AI and cyber security include addressing the ethical implications of AI-driven security systems, ensuring the reliability and transparency of AI algorithms, and staying ahead of evolving cyber threats.
Regulation is important, and with the EU AI Act and the AI Alliance, we are taking steps forward, but as of now, the laws are still miles behind AI development.
There are also opportunities to leverage AI for proactive threat hunting, automated incident response, and predictive analytics to better protect against cyber attacks.
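As one illustration of automated incident response, the sketch below wires a detector’s anomaly score to a containment action. The quarantine_host() and open_ticket() helpers are hypothetical stand-ins for real EDR and ticketing integrations, and the threshold is arbitrary.

```python
# A minimal sketch of automated incident response; the helpers below
# are hypothetical placeholders for real EDR / ITSM integrations.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    anomaly_score: float  # e.g., from a detector like the one above

def quarantine_host(host: str) -> None:
    print(f"[EDR] isolating {host} from the network")

def open_ticket(alert: Alert) -> None:
    print(f"[SOC] ticket opened for {alert.host} (score={alert.anomaly_score:.2f})")

def respond(alert: Alert, block_threshold: float = -0.6) -> None:
    # More negative scores mean more anomalous: auto-contain the worst
    # cases, and route everything to a human analyst for review.
    if alert.anomaly_score < block_threshold:
        quarantine_host(alert.host)
    open_ticket(alert)

respond(Alert(host="srv-web-01", anomaly_score=-0.72))
```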
Working with clients has involved assisting them in understanding the capabilities and limitations of AI in cyber security (and other areas) and helping them integrate AI-powered solutions effectively into their security strategies.
Have there been any new developments around ethical guidelines/standards for the ethical use of AI within cyber security?
Yes! Efforts to establish guidelines and standards for the ethical use of AI within cyber security are ongoing and gaining traction. Organizations such as IEEE and NIST are developing frameworks to promote responsible AI practices in cyber security, focusing on transparency, fairness, accountability, and privacy.
As mentioned, the AI Alliance is composed of technology creators, developers and adopters working together to advance safe and responsible AI.
Also, to regulate the safe use of AI, the European Union has passed the first parts of the very important AI Act.
As a cyber security expert, what are your perspectives around the ethical use of AI within cyber security? How can organizations ensure transparency? How can they ensure that the AI isn’t manipulated by threat actors?
My perspectives on the ethical use of AI within cyber security (and all other fields for that matter) are rooted in the principles of transparency, fairness, accountability, and privacy.
While AI holds immense potential to bolster cyber security defenses and mitigate threats, it’s crucial to ensure that its deployment aligns with ethical considerations.
Transparency is key. Organizations must be transparent about how AI algorithms are developed, trained, and utilized in cyber security operations. This transparency fosters trust among stakeholders and enables scrutiny of AI systems.
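In practice, part of that transparency is making each AI-driven decision auditable. Below is a minimal sketch of the kind of record that supports after-the-fact scrutiny; the schema, model name, and field values are purely illustrative.

```python
# A minimal sketch of a decision audit record that makes an AI-driven
# verdict reviewable later. The schema and values are illustrative.
import json
from datetime import datetime, timezone

def audit_record(model_name: str, model_version: str,
                 inputs: dict, verdict: str, score: float) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,  # ties the decision to an exact training run
        "inputs": inputs,          # what the model actually saw
        "verdict": verdict,
        "score": score,
    }
    return json.dumps(record)

print(audit_record("phishing-classifier", "2024.03.1",
                   {"sender_domain": "example.com", "num_links": 7},
                   "blocked", 0.94))
```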
Fairness is essential to prevent discrimination or bias in AI-driven decision-making processes. It’s imperative to address algorithmic biases that may perpetuate inequalities or disadvantage certain groups. Thoughtful design, rigorous testing, and ongoing monitoring are necessary to ensure fairness in AI applications.
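One simple form that such testing can take is comparing error rates across groups. The sketch below, with made-up data and an illustrative grouping attribute, checks whether a model’s false positive rate differs markedly between groups; a large gap would be a signal to investigate for bias.

```python
# A minimal sketch of a fairness check on hypothetical data:
# did a security model flag a user, vs. whether they were actually
# malicious, grouped by an attribute such as region or business unit.
from collections import defaultdict

records = [
    # (group, flagged_by_model, actually_malicious)
    ("region_a", True,  False),
    ("region_a", False, False),
    ("region_a", True,  True),
    ("region_b", True,  False),
    ("region_b", True,  False),
    ("region_b", False, False),
]

fp = defaultdict(int)   # false positives per group
neg = defaultdict(int)  # benign cases per group

for group, flagged, malicious in records:
    if not malicious:
        neg[group] += 1
        if flagged:
            fp[group] += 1

# Compare false positive rates across groups.
for group in neg:
    print(group, fp[group] / neg[group])
```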
Note: You can compare training an AI model to raising a child into a responsible adult. It needs guidance and fostering, and it needs to learn from its mistakes along the way in order to become responsible and make the right decisions in the end.
Accountability is crucial for holding individuals and organizations responsible for the actions and decisions made by AI systems. Clear lines of accountability should be established to identify who is accountable for AI-related outcomes, including any errors or failures.
Accountability encourages responsible behavior and incentivizes adherence to ethical standards.
Privacy must be protected when using AI in cyber security. Organizations should prioritize the confidentiality and integrity of sensitive data, implementing robust security measures to prevent unauthorized access or misuse. AI algorithms should be designed with privacy-enhancing techniques to minimize the risk of data breaches or privacy violations, and their design should also take requirements like the GDPR and the handling of PII into account.
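Pseudonymization is one such privacy-enhancing technique. As a minimal sketch, the snippet below replaces identifying fields with keyed hashes before an event reaches an AI pipeline; the secret, field names, and event format are all illustrative.

```python
# A minimal sketch of pseudonymizing PII before it enters an AI pipeline,
# using a keyed hash so identifiers stay consistent across events but are
# not reversible without the secret. All names here are illustrative.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # placeholder, not a real key

def pseudonymize(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

event = {"user_email": "alice@example.com", "src_ip": "10.0.0.12",
         "action": "login_failed"}

safe_event = {
    "user_email": pseudonymize(event["user_email"]),
    "src_ip": pseudonymize(event["src_ip"]),
    "action": event["action"],  # non-identifying fields pass through
}
print(safe_event)
```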
Overall, ethical considerations should guide the development, deployment, and governance of AI in cyber security (and other fields leveraging AI).
What are the implications of the new Check Point partnership with NVIDIA in relation to securing AI (cloud) infrastructure at scale?
The partnership shows the importance of securing such platforms, as cyber criminals will obviously try to exploit any new technology. With the immense speed of AI development, there are going to be errors, mistakes, and code and prompts that can be compromised. At Check Point, we have the solutions to secure your AI! Learn more here.