Last-Minute Interview Prep: When Can It Actually Help You Land The Job? Answers From An Expert

You’ve just landed an interview for your dream job… tomorrow morning. Is last-minute prep worth it?

Career coach Sarah Chen says yes, focusing on high-impact areas can boost confidence. Quickly review the job description to tailor your answers, and identify key skills they seek. “Having those stories fresh in your mind can make a huge difference,” Chen advises.

Also, understand the company’s recent activities. “Mentioning something specific you learned… can demonstrate genuine interest,” she notes.

But, don’t cram. “Focus on understanding the big picture and identifying a few key talking points,” Chen says. Authenticity is crucial.

Last-minute preparation can sharpen your focus and help you present your best self, if you direct your limited time strategically.

Spamouflage’s advanced deceptive behavior reinforces need for strong email security

EXECUTIVE SUMMARY:

Ahead of the U.S. elections, adversaries are weaponizing social media to gain political sway. Russian and Iranian efforts have become increasingly aggressive and transparent. However, China appears to have taken a more carefully calculated and nuanced approach.

China’s seeming disinformation efforts have little to do with positioning one political candidate as preferable to another. Rather, the country’s maneuvers may aim to undermine trust in voting systems, elections and America, in general; amplifying criticism and sowing discord.

Spamouflage

In recent months, the Chinese disinformation network, known as Spamouflage, has pursued “advanced deceptive behavior.” It has quietly launched thousands of accounts across more than 50 domains, and used them to target people across the United States.

The group has been active since 2017, but has recently reinforced its efforts.

Fake profiles

The Spamouflage network’s fake online accounts present fake identities, which sometimes change on a whim. The accounts/profiles have been spotted on X, TikTok and elsewhere.

For example:

Harlan claimed to be a New York resident and an Army veteran, age 29. His profile picture showed a well-groomed young man. However, a few months later, his account shifted personas. Suddenly, Harlan appeared to be from Florida and a 31 year-old
Republican influencer. 

At least four different accounts were found to mimic Trump supporters – part of a tactic with the moniker “MAGAflage.”

The fake profiles, including the fake photos, may have been generated through artificial intelligence tools, according to analysts.

Accounts have exhibited certain patterns, using hashtags like #American, while presenting themselves as voters or groups that “love America” but feel alienated by political issues that range from women’s healthcare to Ukraine.

In June, one post on X read “Although I am American, I am extremely opposed to NATO and the behavior of the U.S. government in war. I think soldiers should protect their own country’s people and territory…should not initiate wars on their own…” The text was accompanied by an image showing NATO’s expansion across Europe.

Email security implications

Disinformation campaigns that create (and weaponize) fake profiles, as described above, will have a high degree of success when crafting and distributing phishing emails, as the emails will appear to come from credible sources.

This makes it essential for organizations to implement and for employees to adhere to advanced verification methods that can ensure the veracity of communications.

Advanced email security protocols

Within your organization, if you haven’t done so already, consider implementing the following:

  • Multi-factor authentication. Even if credentials are compromised via phishing, MFA can help protect against unauthorized account access.
  • Email authentication protocols. Technologies such as SPF, DKIM and DMARC can assist with verifying the legitimacy of email senders and spoofing prevention.
  • Advanced threat detection. Advanced threat detection solutions that are powered by AI and machine learning can enhance email traffic security.
  • Employee awareness. Remind employees to not only think before they click, but to also think before they link to information – whether in their professional roles or their personal lives.
  • Incident response plans. Most organizations have incident response plans. But are they routinely updated? Can they address disinformation and deepfake threats?

Further thoughts

To effectively counter threats, organizations need to pursue a dynamic, multi-dimensional approach. But it’s tough.

To get expert guidance, please visit our website or contact our experts. We’re here to help!

Generative AI adoption: Strategic implications & security concerns – CyberTalk

By Manuel Rodriguez. With more than 15 years of experience in cyber security, Manuel Rodriguez is currently the Security Engineering Manager for the North of Latin America at Check Point Software Technologies, where he leads a team of high-level professionals whose objective is to help organizations and businesses meet cyber security needs. Manuel joined Check Point in 2015 and initially worked as a Security Engineer, covering Central America, where he participated in the development of important projects for multiple clients in the region. He had previously served in leadership roles for various cyber security solution providers in Colombia.

Technology evolves very quickly. We often see innovations that are groundbreaking and have the potential to change the way we live and do business. Although artificial intelligence is not necessarily new, in November of 2022 ChatGPT was released, giving the general public access to a technology we know as Generative Artificial Intelligence (GenAI). It was in a short time from then to the point where people and organizations realized it could help them gain a competitive advantage.

Over the past year, organizational adoption of GenAI has nearly doubled, showing the growing interest in embracing this kind of technology. This surge isn’t a temporary trend; it is a clear indication of the impact GenAI is already having and that it will continue to have in the coming years across various industry sectors.

The surge in adoption

Recent data reveals that 65% of organizations are now regularly using generative AI, with overall AI adoption jumping to 72% this year. This rapid increase shows the growing recognition of GenAI’s potential to drive innovation and efficiency. One analyst firm predicts that by 2026, over 80% of enterprises will be utilizing GenAI APIs or applications, highlighting the importance that businesses are giving to integrating this technology into their strategic frameworks.

Building trust and addressing concerns

Although adoption is increasing very fast in organizations, the percentage of the workforce with access to this kind of technology still relatively low. In a recent survey by Deloitte, it was found that 46% of organizations provide approved Generative AI access to 20% or less of their workforce. When asked for the reason behind this, the main answer was around risk and reward. Aligned with that, 92% of business leaders see moderate to high-risk concerns with GenAI.

As organizations scale their GenAI deployments, concerns increase around data security, quality, and explainability. Addressing these issues is essential to generate confidence among stakeholders and ensure the responsible use of AI technologies.

Data security

The adoption of Generative AI (GenAI) in organizations comes with various data security risks. One of the primary concerns is the unauthorized use of GenAI tools, which can lead to data integrity issues and potential breaches. Shadow GenAI, where employees use unapproved GenAI applications, can lead to data leaks, privacy issues and compliance violations.

Clearly defining the GenAI policy in the organization and having appropriate visibility and control over the shared information through these applications will help organizations mitigate this risk and maintain compliance with security regulations. Additionally, real-time user coaching and training has proven effective in altering user actions and reducing data risks.

Compliance and regulations

Compliance with data privacy regulations is a critical aspect of GenAI adoption. Non-compliance can lead to significant legal and financial repercussions. Organizations must ensure that their GenAI tools and practices adhere to relevant regulations, such as GDPR, HIPPA, CCPA and others.

Visibility, monitoring and reporting are essential for compliance, as they provide the necessary oversight to ensure that GenAI applications are used appropriately. Unauthorized or improper use of GenAI tools can lead to regulatory breaches, making it imperative to have clear policies and governance structures in place. Intellectual property challenges also arise from generating infringing content, which can further complicate compliance efforts.

To address these challenges, organizations should establish a robust framework for GenAI governance. This includes developing a comprehensive AI ethics policy that defines acceptable use cases and categorizes data usage based on organizational roles and functions. Monitoring systems are essential for detecting unauthorized GenAI activities and ensuring compliance with regulations.

Specific regulations for GenAI

Several specific regulations and guidelines have been developed or are in the works to address the unique challenges posed by GenAI. Some of those are more focused on the development of new AI tools while others as the California GenAI Guidelines focused on purchase and use. Examples include:

EU AI Act: This landmark regulation aims to ensure the safe and trustworthy use of AI, including GenAI. It includes provisions for risk assessments, technical documentation standards, and bans on certain high-risk AI applications.

U.S. Executive Order on AI: Issued in October of 2023, this order focuses on the safe, secure, and trustworthy development and use of AI technologies. It mandates that federal agencies implement robust risk management and governance frameworks for AI.

California GenAI Guidelines: The state of California has issued guidelines for the public sector’s procurement and use of GenAI. These guidelines emphasize the importance of training, risk assessment, and compliance with existing data privacy laws.

Department of Energy GenAI Reference Guide: This guide provides best practices for the responsible development and use of GenAI, reflecting the latest federal guidance and executive orders.

Recommendations

To effectively manage the risks associated with GenAI adoption, organizations should consider the following recommendations:

Establish clear policies and training: Develop and enforce clear policies on the approved use of GenAI. Provide comprehensive training sessions on ethical considerations and data protection to ensure that all employees understand the importance of responsible AI usage.

Continuously reassess strategies: Regularly reassess strategies and practices to keep up with technological advancements. This includes updating security measures, conducting comprehensive risk assessments, and evaluating third-party vendors.

Implement advanced GenAI security solutions: Deploy advanced GenAI solutions to ensure data security while maintaining comprehensive visibility into GenAI usage. Traditional DLP solutions based on keywords and patterns are not enough. GenAI solutions should give proper visibility by understanding the context without the need to define complicated data-types. This approach not only protects sensitive information, but also allows for real-time monitoring and control, ensuring that all GenAI activities are transparent and compliant with organizational and regulatory requirements.

Foster a culture of responsible AI usage: Encourage a culture that prioritizes ethical AI practices. Promote cross-department collaboration between IT, legal, and compliance teams to ensure a unified approach to GenAI governance.

Maintain transparency and compliance: Ensure transparency in AI processes and maintain compliance with data privacy regulations. This involves continuous monitoring and reporting, as well as developing incident response plans that account for AI-specific challenges.

By following these recommendations, organizations can make good use and take advantage of the benefits of GenAI while effectively managing the associated data security and compliance risks.

CISA’s “Secure by Demand” guidance is must-read – CyberTalk

EXECUTIVE SUMMARY:

Earlier today, the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI), distributed a new “Secure by Demand” guide.

The intention is to assist organizations in driving a more secure technology ecosystem by ensuring that cyber security is embedded from the start.

“This guidance is a wake-up call for any company that missed out on the costs and outages caused by Solar Winds, Log4J, Snowflake and CrowdStrike,” says Check Point CISO Pete Nicoletti.

Why the guide

In cyber security, procurement teams tend to grasp the fundamentals of cyber security requirements in relation to tech acquisitions. However, teams often fail to identify whether or not vendors truly embed cyber security into development cycles from day one.

The guide is designed to help organizations discern this type of critical information when evaluating vendors. It provides readers with questions to ask when buying software, considerations to work through regarding product integration and security, along with assessment tools that allow for grading of a product’s maturity against “secure-by-design” principles.

More information

The Secure by Demand guide is a companion piece to the recently released Software Acquisition Guide for Government Enterprise Consumers: Software Assurance in the Cyber-Supply Chain Risk Management (C-SCRM) Lifecycle.

While the latter focuses on government enterprises, this guide broadens the scope to encompass a wider range of organizations across various sectors.

Key points to note

  • The two guides work in tandem to provide a comprehensive approach to secure software acquisition and supply chain risk management.
  • While the software acquisition guide targets government entities, the demand guide offers insights that are applicable to private sector organizations, non-profits and other institutions.

CISA strongly advises organizations to thoroughly review and implement the recommendations from both guides.

Each guide offers practical, actionable steps that can be integrated into existing procurement and risk management frameworks. Yet, that alone is not enough, according to Check Point Expert Pete Nicoletti…

“In addition to implementing this guidance, companies should add supply chain-related security events to their incident response planning and tabletop exercises to ensure they can recover quickly and with less impact. Further, review supplier contracts to ensure that expensive outages caused by them, offer up their cyber insurance, rather than just recovering the license cost,” he notes.

Get the Secure by Demand Guide: How Software Customers Can Drive a Secure Technology Ecosystem right here.

Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

A preview of the upcoming Black Hat conference… – CyberTalk

A preview of the upcoming Black Hat conference… – CyberTalk

EXECUTIVE SUMMARY:

One of the leading cyber security conferences globally, Black Hat USA is where intellect meets innovation. The 2024 event is taking place from August 3rd – 8th, at the Mandalay Bay Convention Center in Las Vegas.

The conference is highly regarded for its emphasis on cutting-edge cyber security research, high-caliber presentations, skill development workshops, peer networking opportunities, and for its Business Hall, which showcases innovative cyber security solutions.

Although two other cyber security conferences in Las Vegas will compete for attention next week, Black Hat is widely considered the main draw. Last year, Black Hat USA hosted roughly 20,000 in-person attendees from 127 different countries.

Event information

The Black Hat audience typically includes a mix of cyber security researchers, ethical hackers, cyber security professionals – from system administrators to CISOs – business development professionals, and government security experts.

On the main stage this year, featured speakers include Ann Johnson, the Corporate Vice President and Deputy CISO of Microsoft, Jen Easterly, Director of the Cybersecurity and Infrastructure Security Agency (CISA), and Harry Coker Jr., National Cyber Director for the United States Executive Office of the President.

The Black Hat CISO Summit, on Monday, August 5th through Tuesday, August 6th, caters to the needs and interests of CISOs and security executives. This track will address topics ranging from the quantification of cyber risk costs, to supply chain security, to cyber crisis management.

Professionals who are certified through ISC2 can earn 5.5 Continuing Professional Education (CPE) credits for CISO Summit attendance.

Why else Black Hat

  • Access to thousands of industry professionals who have similar interests, who can discuss challenges and who can provide new product insights.
  • Access to the latest cyber research, which may not yet be widely available, helping your organization prevent potential attacks before they transform into fast-moving, large-scale issues.
  • Cyber security strategy development in partnership with experts and vendors.
    • Check Point is offering exclusive 1:1 meetings with the company’s cyber security executives. If you plan to attend the event and would like to book a meeting with a Check Point executive, please click here.
  • Community building. Connect with others, collaborate on initiatives and strengthen everyone’s cyber security in the process.

Must-see sessions

If you’re attending the event, plan ahead to make the most of your time. There’s so much to see and do. Looking for a short-list of must-see speaking sessions? Here are a handful of expert-led and highly recommended talks:

  • Enhancing Cloud Security: Preventing Zero-Day Attacks with Modernized WAPs: Wednesday, August 7th, at 11:00am, booth #2936
  • How to Train your AI Co-Pilot: Wednesday, August 7th, at 12:30pm, booth #2936
  • Key Factors in Choosing a SASE Solution: Thursday, August 8th, at 10:45am, booth #2936

Further details

Be ready for anything and bring the best version of yourself – you never know who you’ll meet. They could be your next software developer, corporate manager, business partner, MSSP, or cyber security vendor. Meet us at booth #2936. We can’t wait to see you at Black Hat USA 2024!

For more event information, click here. For additional cutting-edge cyber security insights, click here. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

Global data breach costs hit all-time high – CyberTalk

EXECUTIVE SUMMARY:

Global data breach costs have hit an all-time high, according to IBM’s annual Cost of a Data Breach report. The tech giant collaborated with the Ponemon institute to study more than 600 organizational breaches between March of 2023 and February of 2024.

The breaches affected 17 industries, across 16 countries and regions, and involved leaks of 2,000-113,000 records per breach. Here’s what researchers found…

Essential information

The global average cost of a data breach is $4.88 million, up nearly 10% from last year’s $4.5 million. Key drivers of the year-over-year cost spike included post-breach third-party expenses, along with lost business.

Global data breach costs hit all-time high – CyberTalk
Image courtesy of IBM.

Over 50% of organizations that were interviewed said that they are passing the breach costs on to customers through higher prices for goods and services.

More key findings

  • For the 14th consecutive year, the U.S. has the highest average data breach costs worldwide; nearly $9.4 million.
  • In the last year, Canada and Japan both experienced drops in average breach costs.
  • Most breaches could be traced back to one of two sources – stolen credentials or a phishing email.
  • Seventy percent of organizations noted that breaches led to “significant” or “very significant” levels of disruption.

Deep-dive insights: AI

The report also observed that an increasing number of organizations are adopting artificial intelligence and automation to prevent breaches. Nearly two-thirds of organizations were found to have deployed AI and automation technologies across security operations centers.

The use of AI prevention workflows reduced the average cost of a breach by $2.2 million. Organizations without AI prevention workflows did not experience these cost savings.

Right now, only 20% of organizations report using gen AI security tools. However, those that have implemented them note a net positive effect. GenAI security tools can mitigate the average cost of a breach by more than $167,000, according to the report.

Deep-dive insights: Cloud

Multi-environment cloud breaches were found to cost more than $5 million to contend with, on average. Out of all breach types, they also took the longest time to identify and contain, reflecting the challenge that is identifying data and protecting it.

In regards to cloud-based breaches, commonly stolen data types included personal identifying information (PII) and intellectual property (IP).

As generative AI initiatives draw this data into new programs and processes, cyber security professionals are encouraged to reassess corresponding security and access controls.

The role of staffing issues

A number of organizations that contended with cyber attacks were found to have under-staffed cyber security teams. Staffing shortages are up 26% compared to last year.

Organizations with cyber security staff shortages averaged an additional $1.76 million in breach costs as compared to organizations with minimal or no staffing issues.

Staffing issues may be contributing to the increased use of AI and automation, which again, have been shown to reduce breach costs.

Further information

For more AI and cloud insights, click here. Access the Cost of a Data Breach 2024 report here. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

Deepfake misuse & deepfake detection (before it’s too late) – CyberTalk

Deepfake misuse & deepfake detection (before it’s too late) – CyberTalk

Micki Boland is a global cyber security warrior and evangelist with Check Point’s Office of the CTO. Micki has over 20 years in ICT, cyber security, emerging technology, and innovation. Micki’s focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.

In this dynamic and insightful interview, Check Point expert Micki Boland discusses how deepfakes are evolving, why that matters for organizations, and how organizations can take action to protect themselves. Discover on-point analyses that could reshape your decisions, improving cyber security and business outcomes. Don’t miss this.

Can you explain how deepfake technology works? 

Deepfakes involve simulated video, audio, and images to be delivered as content via online news, mobile applications, and through social media platforms. Deepfake videos are created with Generative Adversarial Networks (GAN), a type of Artificial Neural Network that uses Deep Learning to create synthetic content.

GANs sound cool, but technical. Could you break down how they operate?

GAN are a class of machine learning systems that have two neural network models; a generator and discriminator which game each other. Training data in the form of video, still images, and audio is fed to the generator, which then seeks to recreate it. The discriminator then tries to discern the training data from the recreated data produced by the generator.

The two artificial intelligence engines repeatedly game each other, getting iteratively better. The result is convincing, high quality synthetic video, images, or audio. A good example of GAN at work is NVIDIA GAN. Navigate to the website https://thispersondoesnotexist.com/ and you will see a composite image of a human face that was created by the NVIDIA GAN using faces on the internet. Refreshing the internet browser yields a new synthetic image of a human that does not exist.

What are some notable examples of deepfake tech’s misuse?

Most people are not even aware of deepfake technologies, although these have now been infamously utilized to conduct major financial fraud. Politicians have also used the technology against their political adversaries. Early in the war between Russia and Ukraine, Russia created and disseminated a deepfake video of Ukrainian President Volodymyr Zelenskyy advising Ukrainian soldiers to “lay down their arms” and surrender to Russia.

How was the crisis involving the Zelenskyy deepfake video managed?

The deepfake quality was poor and it was immediately identified as a deepfake video attributable to Russia. However, the technology is becoming so convincing and so real that soon it will be impossible for the regular human being to discern GenAI at work. And detection technologies, while have a tremendous amount of funding and support by big technology corporations, are lagging way behind.

What are some lesser-known uses of deepfake technology and what risks do they pose to organizations, if any?

Hollywood is using deepfake technologies in motion picture creation to recreate actor personas. One such example is Bruce Willis, who sold his persona to be used in movies without his acting due to his debilitating health issues. Voicefake technology (another type of deepfake) enabled an autistic college valedictorian to address her class at her graduation.

Yet, deepfakes pose a significant threat. Deepfakes are used to lure people to “click bait” for launching malware (bots, ransomware, malware), and to conduct financial fraud through CEO and CFO impersonation. More recently, deepfakes have been used by nation-state adversaries to infiltrate organizations via impersonation or fake jobs interviews over Zoom.

How are law enforcement agencies addressing the challenges posed by deepfake technology?

Europol has really been a leader in identifying GenAI and deepfake as a major issue. Europol supports the global law enforcement community in the Europol Innovation Lab, which aims to develop innovative solutions for EU Member States’ operational work. Already in Europe, there are laws against deepfake usage for non-consensual pornography and cyber criminal gangs’ use of deepfakes in financial fraud.

What should organizations consider when adopting Generative AI technologies, as these technologies have such incredible power and potential?

Every organization is seeking to adopt GenAI to help improve customer satisfaction, deliver new and innovative services, reduce administrative overhead and costs, scale rapidly, do more with less and do it more efficiently. In consideration of adopting GenAI, organizations should first understand the risks, rewards, and tradeoffs associated with adopting this technology. Additionally, organizations must be concerned with privacy and data protection, as well as potential copyright challenges.

What role do frameworks and guidelines, such as those from NIST and OWASP, play in the responsible adoption of AI technologies?

On January 26th, 2023, NIST released its forty-two page Artificial Intelligence Risk Management Framework (AI RMF 1.0) and AI Risk Management Playbook (NIST 2023). For any organization, this is a good place to start.

The primary goal of the NIST AI Risk Management Framework is to help organizations create AI-focused risk management programs, leading to the responsible development and adoption of AI platforms and systems.

The NIST AI Risk Management Framework will help any organization align organizational goals for and use cases for AI. Most importantly, this risk management framework is human centered. It includes social responsibility information, sustainability information and helps organizations closely focus on the potential or unintended consequences and impact of AI use.

Another immense help for organizations that wish to further understand risk associated with GenAI Large Language Model adoption is the OWASP Top 10 LLM Risks list. OWASP released version 1.1 on October 16th, 2023. Through this list, organizations can better understand risks such as inject and data poisoning. These risks are especially critical to know about when bringing an LLM in house.

As organizations adopt GenAI, they need a solid framework through which to assess, monitor, and identify GenAI-centric attacks. MITRE has recently introduced ATLAS, a robust framework developed specifically for artificial intelligence and aligned to the MITRE ATT&CK framework.

For more of Check Point expert Micki Boland’s insights into deepfakes, please see CyberTalk.org’s past coverage. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

The future of AI and ML (in 2024) – CyberTalk

EXECUTIVE SUMMARY:

In businesses everywhere, mention of Artificial Intelligence (AI) simultaneously evokes a sense of optimism, enthusiasm and skepticism, if not a certain degree of fear. The AI robots are about to take control of the…sorry, wrong article.

The future of AI and ML in 2024

The rapid advancement of artificial intelligence has led to its widespread integration across industries and ecosystems, including those belonging to both cyber adversaries and cyber defenders.

Hackers hope to get a handle on AI in order to launch new threats at-speed and scale. According to experts, adversarial plans likely include phishing initiatives with ransomware payloads, deepfake scams that deceive executives, and malware scripts that are rewrites of existing threats, enabling the code to evade detection.

“Next year we’ll see more threat actors adopt AI to accelerate and expand every aspect of their toolkit,” says Check Point Threat Intelligence Group Manager, Sergey Shykevich.

AI as a double-edged sword

However, although hackers aim to use AI maliciously, AI is a double-edged sword, and research indicates that it will serve as a valuable force-multiplier for cyber security professionals in 2024 (and beyond). It will continue to transform threat identification, enhance organizations’ security posture, and lead to a safer cyber ecosystem across industries.

“Just as we have seen cyber criminals tap into the potential of AI and ML, so too will cyber defenders. We have already seen significant investment in AI for cyber security, and that will continue as more companies look to guard against advanced threats,” says Shykevich.

The key is leveraging AI’s strengths to counter its own weaknesses.

Leveraging AI’s strengths

Among cyber security professionals, artificial intelligence is often used at the “identification” stage of the SANS Institute’s well-known incident response framework. In other words, AI can help identify incidents in minutes, rather than in hours or days. AI can quickly parse through immense volumes of data to isolate patterns that point to the source and scope of a threat.

A truncated incident identification timeline can lead to faster breach containment, saving organizations on costs. The Cost of a Data Breach 2023 global survey has found that use of AI can speed up breach containment by 100 days (on average), and that AI and automation have delivered cost savings of nearly $1.8 million for individual organizations.

“In the coming year, we must innovate faster than the threats we face to stay one step ahead. Let’s harness the full potential of AI for cybersecurity,” says Shykevich.

Enhancing cyber security posture

In terms of bolstering an organization’s overall security posture, because AI can learn from past threats, AI can vastly improve threat detection capabilities. Using historical data, machine learning algorithms can track patterns and actually develop adaptive, new threat detection methods, making cyber breaches more difficult for adversaries to execute over the long-term.

AI can also automate repetitive tasks, eliminating human error and enabling humans to take on higher-level work. Beyond that, AI can improve the accuracy of decision-making, elevating the competence levels of cyber security teams.

All of these actions, among others, enable AI-powered solutions (and AI-focused security staff) to protect people, processes and technologies better than otherwise possible via traditional cyber security tools. AI is becoming and will continue to establish itself as an invaluable asset within the cyber security landscape.

That said, “In general, while organizations have found that AI is sexy, that doesn’t mean that we need to use AI everywhere. We need to be careful. We need to use it when it’s relevant, and not when it’s irrelevant,” cautions Check Point’s Global CISO emeritus and Field CISO for the EMEA region, Jonathan Fischbein.

A safer cyber ecosystem at-large

AI-based cyber security solutions are becoming increasingly critical components of cyber security stacks, and they’re not only strengthening individual organizations’ security – they’re able to help strengthen third-party security, ultimately strengthening the security of the supply chain and that of industry ecosystems at-large.

Policy makers around the world are convening to address the risks associated with AI and automated systems, working to ensure the security of divergent industries – from critical infrastructure to healthcare –  and protection for those who they serve. “There have been significant steps in Europe and the US in regulating the use of AI,” says Shykevich.

AI is fostering new types of partnerships between humans and machines, which allow for outsized cyber security outcomes – ones that amount to more than the sum of their parts.

Rapid change and growth

In the next few months, industry analysts anticipate continued evolution of AI-based cyber security capabilities, along with creative new use-cases for corresponding applications and code.

AI’s meteoric rise across the past decade, which has massively accelerated within the past year, signals its incredible potential to reshape the cyber landscape. Despite some degree of risk, artificial intelligence presents promise and hope for digital security like never before.


For further information about AI, ML and cyber security, please see the following resources

  • Explore the advantages of implementing AI within cyber security – Learn more
  • For more in-depth AI and cyber security insights, check out this whitepaper – Download now
  • Discover ThreatCloud AI, the brain behind Check Point’s best security – Product information

What is Cryware? What Microsoft wants you to know right now

Microsoft warns of “Cryware” infostealing malware that targets cryptocurrency wallets. What is Cryware? Cryware attacks lead to the irreversible theft of virtual currencies through fraudulent transfers to adversary controlled wallets. Cryware information stealers collect and exfiltrate data directly from “hot” wallets or online cryptocurrency wallets. Due to the fact that hot wallets are […]

Robin Hood ransomware demands goodwill ransom for charity

By Edwin Doyle, Global Security Evangelist, Check Point Software. GoodWill ransomware forces victims to record acts of kindness and to then publish corresponding content on social media. GoodWill ransomware In traditional ransomware attacks, the ransomware operators hold files or networks hostage in exchange for a ransom. They demand anywhere from hundreds to millions of dollars […]