Spamouflage’s advanced deceptive behavior reinforces the need for strong email security

EXECUTIVE SUMMARY:

Ahead of the U.S. elections, adversaries are weaponizing social media to gain political sway. Russian and Iranian efforts have become increasingly aggressive and overt. However, China appears to have taken a more carefully calculated and nuanced approach.

China’s apparent disinformation efforts have little to do with positioning one political candidate as preferable to another. Rather, the country’s maneuvers may aim to undermine trust in voting systems, elections, and America in general, amplifying criticism and sowing discord.

Spamouflage

In recent months, the Chinese disinformation network known as Spamouflage has pursued “advanced deceptive behavior.” It has quietly launched thousands of accounts across more than 50 domains and used them to target people across the United States.

The group has been active since 2017, but has recently intensified its efforts.

Fake profiles

The Spamouflage network’s accounts present fake identities, which sometimes change on a whim. The profiles have been spotted on X, TikTok and elsewhere.

For example:

Harlan claimed to be a New York resident and an Army veteran, age 29. His profile picture showed a well-groomed young man. However, a few months later, his account shifted personas. Suddenly, Harlan appeared to be a 31-year-old Republican influencer from Florida.

At least four different accounts were found to mimic Trump supporters – part of a tactic dubbed “MAGAflage.”

The fake profiles, including the fake photos, may have been generated through artificial intelligence tools, according to analysts.

Accounts have exhibited certain patterns, using hashtags like #American, while presenting themselves as voters or groups that “love America” but feel alienated by political issues that range from women’s healthcare to Ukraine.

In June, one post on X read “Although I am American, I am extremely opposed to NATO and the behavior of the U.S. government in war. I think soldiers should protect their own country’s people and territory…should not initiate wars on their own…” The text was accompanied by an image showing NATO’s expansion across Europe.

Email security implications

Disinformation campaigns that create (and weaponize) fake profiles, as described above, can lend considerable credibility to phishing emails, as the messages appear to come from established, trustworthy personas.

This makes it essential for organizations to implement, and for employees to adhere to, advanced verification methods that can confirm the veracity of communications.

Advanced email security protocols

Within your organization, if you haven’t done so already, consider implementing the following:

  • Multi-factor authentication. Even if credentials are compromised via phishing, MFA can help protect against unauthorized account access.
  • Email authentication protocols. Technologies such as SPF, DKIM and DMARC can help verify the legitimacy of email senders and prevent spoofing (see the sketch after this list).
  • Advanced threat detection. Advanced threat detection solutions that are powered by AI and machine learning can enhance email traffic security.
  • Employee awareness. Remind employees to not only think before they click, but also to think before they connect with unfamiliar accounts or sources – whether in their professional roles or their personal lives.
  • Incident response plans. Most organizations have incident response plans. But are they routinely updated? Can they address disinformation and deepfake threats?
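
To make the email authentication item above concrete, here is a minimal Python sketch that looks up the SPF and DMARC policies a domain publishes in DNS. It assumes the third-party dnspython package ("pip install dnspython"), and "example.com" is a hypothetical placeholder; production verification should rely on a mail security gateway rather than an ad hoc script like this.

```python
# Minimal sketch: query the SPF and DMARC policies a domain publishes in DNS.
# Assumes the third-party dnspython package; "example.com" is a placeholder.
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return all TXT record strings published at the given DNS name."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(rdata.strings).decode() for rdata in answers]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"  # hypothetical domain to check
spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF policy:  ", spf or "none published")
print("DMARC policy:", dmarc or "none published")
```

A sender domain that publishes neither record is far easier to spoof; DMARC in particular tells receiving mail servers what to do with messages that fail SPF or DKIM checks.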

Further thoughts

To effectively counter threats, organizations need to pursue a dynamic, multi-dimensional approach. But doing so is no small task.

To get expert guidance, please visit our website or contact our experts. We’re here to help!

Generative AI adoption: Strategic implications & security concerns – CyberTalk

By Manuel Rodriguez. With more than 15 years of experience in cyber security, Manuel Rodriguez is currently the Security Engineering Manager for the North of Latin America at Check Point Software Technologies, where he leads a team of high-level professionals whose objective is to help organizations and businesses meet cyber security needs. Manuel joined Check Point in 2015 and initially worked as a Security Engineer, covering Central America, where he participated in the development of important projects for multiple clients in the region. He had previously served in leadership roles for various cyber security solution providers in Colombia.

Technology evolves very quickly. We often see innovations that are groundbreaking and have the potential to change the way we live and do business. Although artificial intelligence is not necessarily new, the release of ChatGPT in November of 2022 gave the general public access to a technology we know as Generative Artificial Intelligence (GenAI). In a short time, people and organizations realized that it could help them gain a competitive advantage.

Over the past year, organizational adoption of GenAI has nearly doubled, showing the growing interest in embracing this kind of technology. This surge isn’t a temporary trend; it is a clear indication of the impact GenAI is already having and that it will continue to have in the coming years across various industry sectors.

The surge in adoption

Recent data reveals that 65% of organizations are now regularly using generative AI, with overall AI adoption jumping to 72% this year. This rapid increase shows the growing recognition of GenAI’s potential to drive innovation and efficiency. One analyst firm predicts that by 2026, over 80% of enterprises will be utilizing GenAI APIs or applications, highlighting the importance that businesses are giving to integrating this technology into their strategic frameworks.

Building trust and addressing concerns

Although adoption is increasing very fast in organizations, the percentage of the workforce with access to this kind of technology is still relatively low. A recent Deloitte survey found that 46% of organizations provide approved Generative AI access to 20% or less of their workforce. When asked why, respondents mainly pointed to the balance of risk and reward. Aligned with that, 92% of business leaders see moderate to high-risk concerns with GenAI.

As organizations scale their GenAI deployments, concerns increase around data security, quality, and explainability. Addressing these issues is essential to generate confidence among stakeholders and ensure the responsible use of AI technologies.

Data security

The adoption of Generative AI (GenAI) in organizations comes with various data security risks. One of the primary concerns is the unauthorized use of GenAI tools, which can lead to data integrity issues and potential breaches. Shadow GenAI, where employees use unapproved GenAI applications, can lead to data leaks, privacy issues and compliance violations.

Clearly defining the organization’s GenAI policy, and maintaining appropriate visibility and control over the information shared through these applications, will help organizations mitigate this risk and maintain compliance with security regulations. Additionally, real-time user coaching and training have proven effective in altering user actions and reducing data risks.

Compliance and regulations

Compliance with data privacy regulations is a critical aspect of GenAI adoption. Non-compliance can lead to significant legal and financial repercussions. Organizations must ensure that their GenAI tools and practices adhere to relevant regulations, such as GDPR, HIPAA, CCPA and others.

Visibility, monitoring and reporting are essential for compliance, as they provide the necessary oversight to ensure that GenAI applications are used appropriately. Unauthorized or improper use of GenAI tools can lead to regulatory breaches, making it imperative to have clear policies and governance structures in place. Intellectual property challenges also arise from generating infringing content, which can further complicate compliance efforts.

To address these challenges, organizations should establish a robust framework for GenAI governance. This includes developing a comprehensive AI ethics policy that defines acceptable use cases and categorizes data usage based on organizational roles and functions. Monitoring systems are essential for detecting unauthorized GenAI activities and ensuring compliance with regulations.

Specific regulations for GenAI

Several specific regulations and guidelines have been developed or are in the works to address the unique challenges posed by GenAI. Some are focused on the development of new AI tools, while others, such as the California GenAI Guidelines, focus on procurement and use. Examples include:

EU AI Act: This landmark regulation aims to ensure the safe and trustworthy use of AI, including GenAI. It includes provisions for risk assessments, technical documentation standards, and bans on certain high-risk AI applications.

U.S. Executive Order on AI: Issued in October of 2023, this order focuses on the safe, secure, and trustworthy development and use of AI technologies. It mandates that federal agencies implement robust risk management and governance frameworks for AI.

California GenAI Guidelines: The state of California has issued guidelines for the public sector’s procurement and use of GenAI. These guidelines emphasize the importance of training, risk assessment, and compliance with existing data privacy laws.

Department of Energy GenAI Reference Guide: This guide provides best practices for the responsible development and use of GenAI, reflecting the latest federal guidance and executive orders.

Recommendations

To effectively manage the risks associated with GenAI adoption, organizations should consider the following recommendations:

Establish clear policies and training: Develop and enforce clear policies on the approved use of GenAI. Provide comprehensive training sessions on ethical considerations and data protection to ensure that all employees understand the importance of responsible AI usage.

Continuously reassess strategies: Regularly reassess strategies and practices to keep up with technological advancements. This includes updating security measures, conducting comprehensive risk assessments, and evaluating third-party vendors.

Implement advanced GenAI security solutions: Deploy advanced GenAI solutions to ensure data security while maintaining comprehensive visibility into GenAI usage. Traditional DLP solutions based on keywords and patterns are not enough; GenAI security solutions should provide proper visibility by understanding context, without the need to define complicated data types. This approach not only protects sensitive information, but also allows for real-time monitoring and control, ensuring that all GenAI activities are transparent and compliant with organizational and regulatory requirements.
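
As a rough illustration of why keyword- and pattern-based DLP falls short, consider the hypothetical sketch below. The regexes and sample prompts are invented for illustration: the first prompt trips a pattern, while the second is just as sensitive but matches nothing, which is the gap context-aware GenAI security solutions aim to close.

```python
# Sketch of a traditional keyword/pattern DLP check over GenAI prompts.
# The regexes and sample prompts are hypothetical, for illustration only.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def keyword_dlp_scan(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

# Caught: a literal card number matches a pattern.
print(keyword_dlp_scan("Summarize this card: 4111 1111 1111 1111"))  # ['credit_card']

# Missed: equally sensitive, but no regex fires. Flagging this requires
# understanding context -- exactly what pattern-based DLP cannot do.
print(keyword_dlp_scan("Draft the offer letter for our unannounced acquisition of Acme"))  # []
```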

Foster a culture of responsible AI usage: Encourage a culture that prioritizes ethical AI practices. Promote cross-department collaboration between IT, legal, and compliance teams to ensure a unified approach to GenAI governance.

Maintain transparency and compliance: Ensure transparency in AI processes and maintain compliance with data privacy regulations. This involves continuous monitoring and reporting, as well as developing incident response plans that account for AI-specific challenges.

By following these recommendations, organizations can take full advantage of GenAI’s benefits while effectively managing the associated data security and compliance risks.

CISA’s “Secure by Demand” guidance is a must-read – CyberTalk

EXECUTIVE SUMMARY:

Earlier today, the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) distributed a new “Secure by Demand” guide.

The intention is to assist organizations in driving a more secure technology ecosystem by ensuring that cyber security is embedded from the start.

“This guidance is a wake-up call for any company that missed out on the costs and outages caused by SolarWinds, Log4j, Snowflake and CrowdStrike,” says Check Point CISO Pete Nicoletti.

Why the guide

Procurement teams tend to grasp the fundamentals of cyber security requirements in relation to tech acquisitions. However, they often fail to identify whether or not vendors truly embed cyber security into development cycles from day one.

The guide is designed to help organizations discern this type of critical information when evaluating vendors. It provides readers with questions to ask when buying software, considerations to work through regarding product integration and security, along with assessment tools that allow for grading of a product’s maturity against “secure-by-design” principles.

More information

The Secure by Demand guide is a companion piece to the recently released Software Acquisition Guide for Government Enterprise Consumers: Software Assurance in the Cyber-Supply Chain Risk Management (C-SCRM) Lifecycle.

While the latter focuses on government enterprises, this guide broadens the scope to encompass a wider range of organizations across various sectors.

Key points to note

  • The two guides work in tandem to provide a comprehensive approach to secure software acquisition and supply chain risk management.
  • While the software acquisition guide targets government entities, the demand guide offers insights that are applicable to private sector organizations, non-profits and other institutions.

CISA strongly advises organizations to thoroughly review and implement the recommendations from both guides.

Each guide offers practical, actionable steps that can be integrated into existing procurement and risk management frameworks. Yet, that alone is not enough, according to Check Point Expert Pete Nicoletti…

“In addition to implementing this guidance, companies should add supply chain-related security events to their incident response planning and tabletop exercises to ensure they can recover quickly and with less impact. Further, review supplier contracts to ensure that expensive outages caused by them offer up their cyber insurance, rather than just recovering the license cost,” he notes.

Get the Secure by Demand Guide: How Software Customers Can Drive a Secure Technology Ecosystem right here.

Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

A preview of the upcoming Black Hat conference… – CyberTalk

EXECUTIVE SUMMARY:

One of the leading cyber security conferences globally, Black Hat USA is where intellect meets innovation. The 2024 event is taking place from August 3rd – 8th, at the Mandalay Bay Convention Center in Las Vegas.

The conference is highly regarded for its emphasis on cutting-edge cyber security research, high-caliber presentations, skill development workshops, peer networking opportunities, and for its Business Hall, which showcases innovative cyber security solutions.

Although two other cyber security conferences in Las Vegas will compete for attention next week, Black Hat is widely considered the main draw. Last year, Black Hat USA hosted roughly 20,000 in-person attendees from 127 different countries.

Event information

The Black Hat audience typically includes a mix of cyber security researchers, ethical hackers, cyber security professionals – from system administrators to CISOs – business development professionals, and government security experts.

On the main stage this year, featured speakers include Ann Johnson, the Corporate Vice President and Deputy CISO of Microsoft, Jen Easterly, Director of the Cybersecurity and Infrastructure Security Agency (CISA), and Harry Coker Jr., National Cyber Director for the United States Executive Office of the President.

The Black Hat CISO Summit, on Monday, August 5th through Tuesday, August 6th, caters to the needs and interests of CISOs and security executives. This track will address topics ranging from the quantification of cyber risk costs, to supply chain security, to cyber crisis management.

Professionals who are certified through ISC2 can earn 5.5 Continuing Professional Education (CPE) credits for CISO Summit attendance.

Why else attend Black Hat

  • Access to thousands of industry professionals who have similar interests, who can discuss challenges and who can provide new product insights.
  • Access to the latest cyber research, which may not yet be widely available, helping your organization prevent potential attacks before they transform into fast-moving, large-scale issues.
  • Cyber security strategy development in partnership with experts and vendors.
    • Check Point is offering exclusive 1:1 meetings with the company’s cyber security executives. If you plan to attend the event and would like to book a meeting with a Check Point executive, please click here.
  • Community building. Connect with others, collaborate on initiatives and strengthen everyone’s cyber security in the process.

Must-see sessions

If you’re attending the event, plan ahead to make the most of your time. There’s so much to see and do. Looking for a short-list of must-see speaking sessions? Here are a handful of expert-led and highly recommended talks:

  • Enhancing Cloud Security: Preventing Zero-Day Attacks with Modernized WAPs: Wednesday, August 7th, at 11:00am, booth #2936
  • How to Train your AI Co-Pilot: Wednesday, August 7th, at 12:30pm, booth #2936
  • Key Factors in Choosing a SASE Solution: Thursday, August 8th, at 10:45am, booth #2936

Further details

Be ready for anything and bring the best version of yourself – you never know who you’ll meet. They could be your next software developer, corporate manager, business partner, MSSP, or cyber security vendor. Meet us at booth #2936. We can’t wait to see you at Black Hat USA 2024!

For more event information, click here. For additional cutting-edge cyber security insights, click here. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

Global data breach costs hit all-time high – CyberTalk

EXECUTIVE SUMMARY:

Global data breach costs have hit an all-time high, according to IBM’s annual Cost of a Data Breach report. The tech giant collaborated with the Ponemon Institute to study more than 600 organizational breaches between March of 2023 and February of 2024.

The breaches affected 17 industries, across 16 countries and regions, and involved leaks of 2,000-113,000 records per breach. Here’s what researchers found…

Essential information

The global average cost of a data breach is $4.88 million, up nearly 10% from last year’s $4.5 million. Key drivers of the year-over-year cost spike included post-breach third-party expenses, along with lost business.

Image courtesy of IBM.

Over 50% of organizations that were interviewed said that they are passing the breach costs on to customers through higher prices for goods and services.

More key findings

  • For the 14th consecutive year, the U.S. has the highest average data breach costs worldwide; nearly $9.4 million.
  • In the last year, Canada and Japan both experienced drops in average breach costs.
  • Most breaches could be traced back to one of two sources – stolen credentials or a phishing email.
  • Seventy percent of organizations noted that breaches led to “significant” or “very significant” levels of disruption.

Deep-dive insights: AI

The report also observed that an increasing number of organizations are adopting artificial intelligence and automation to prevent breaches. Nearly two-thirds of organizations were found to have deployed AI and automation technologies across security operations centers.

Organizations that applied AI-powered prevention workflows saw average breach costs that were $2.2 million lower than those of organizations without such workflows.

Right now, only 20% of organizations report using gen AI security tools. However, those that have implemented them note a net positive effect. GenAI security tools can reduce the average cost of a breach by more than $167,000, according to the report.

Deep-dive insights: Cloud

Multi-environment cloud breaches were found to cost more than $5 million to contend with, on average. Out of all breach types, they also took the longest to identify and contain, reflecting the difficulty of locating and protecting data that spans multiple environments.

In cloud-based breaches, commonly stolen data types included personally identifiable information (PII) and intellectual property (IP).

As generative AI initiatives draw this data into new programs and processes, cyber security professionals are encouraged to reassess corresponding security and access controls.

The role of staffing issues

A number of organizations that contended with cyber attacks were found to have under-staffed cyber security teams. Staffing shortages are up 26% compared to last year.

Organizations with cyber security staff shortages averaged an additional $1.76 million in breach costs as compared to organizations with minimal or no staffing issues.

Staffing issues may be contributing to the increased use of AI and automation, which again, have been shown to reduce breach costs.

Further information

For more AI and cloud insights, click here. Access the Cost of a Data Breach 2024 report here. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

Deepfake misuse & deepfake detection (before it’s too late) – CyberTalk

Micki Boland is a global cyber security warrior and evangelist with Check Point’s Office of the CTO. Micki has over 20 years in ICT, cyber security, emerging technology, and innovation. Micki’s focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.

In this dynamic and insightful interview, Check Point expert Micki Boland discusses how deepfakes are evolving, why that matters for organizations, and how organizations can take action to protect themselves. Discover on-point analyses that could reshape your decisions, improving cyber security and business outcomes. Don’t miss this.

Can you explain how deepfake technology works? 

Deepfakes are simulated video, audio, and images delivered as content via online news, mobile applications, and social media platforms. Deepfake videos are created with Generative Adversarial Networks (GANs), a type of artificial neural network that uses deep learning to create synthetic content.

GANs sound cool, but technical. Could you break down how they operate?

GANs are a class of machine learning systems built from two neural network models, a generator and a discriminator, which game each other. Training data in the form of video, still images, and audio is fed to the generator, which then seeks to recreate it. The discriminator then tries to discern the training data from the recreated data produced by the generator.

The two artificial intelligence engines repeatedly game each other, getting iteratively better. The result is convincing, high-quality synthetic video, images, or audio. A good example of a GAN at work is NVIDIA’s. Navigate to the website https://thispersondoesnotexist.com/ and you will see a composite image of a human face that was created by the NVIDIA GAN using faces from the internet. Refreshing the browser yields a new synthetic image of a person who does not exist.
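
For readers who want to see the generator/discriminator “game” in code, below is a toy PyTorch training loop. This is a minimal sketch for intuition only, with random stand-in data and hypothetical layer sizes; real deepfake systems train far larger networks on large image, video, and audio datasets.

```python
# Toy GAN in PyTorch: a generator and a discriminator "gaming" each other,
# as described above. Minimal sketch for intuition only -- sizes, data, and
# step count are hypothetical stand-ins, not a real deepfake pipeline.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores samples as real (1) or fake (0).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, data_dim)  # stand-in for real training data

for step in range(1000):
    # Train the discriminator to separate real data from generated fakes.
    fake = G(torch.randn(256, latent_dim)).detach()  # freeze G for this step
    d_loss = (loss_fn(D(real_data), torch.ones(256, 1)) +
              loss_fn(D(fake), torch.zeros(256, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator into scoring fakes as real.
    g_loss = loss_fn(D(G(torch.randn(256, latent_dim))), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each iteration, the discriminator gets slightly better at spotting fakes and the generator gets slightly better at producing them; that adversarial dynamic is what eventually yields convincing synthetic media.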

What are some notable examples of deepfake tech’s misuse?

Most people are not even aware of deepfake technologies, although these have now been infamously utilized to conduct major financial fraud. Politicians have also used the technology against their political adversaries. Early in the war between Russia and Ukraine, Russia created and disseminated a deepfake video of Ukrainian President Volodymyr Zelenskyy advising Ukrainian soldiers to “lay down their arms” and surrender to Russia.

How was the crisis involving the Zelenskyy deepfake video managed?

The deepfake quality was poor and it was immediately identified as a deepfake video attributable to Russia. However, the technology is becoming so convincing and so real that soon it will be impossible for the regular human being to discern GenAI at work. And detection technologies, while they have a tremendous amount of funding and support from big technology corporations, are lagging far behind.

What are some lesser-known uses of deepfake technology and what risks do they pose to organizations, if any?

Hollywood is using deepfake technologies in motion picture creation to recreate actor personas. One such example is Bruce Willis, who sold his persona to be used in movies without his acting due to his debilitating health issues. Voicefake technology (another type of deepfake) enabled an autistic college valedictorian to address her class at her graduation.

Yet, deepfakes pose a significant threat. Deepfakes are used to lure people to “click bait” for launching malware (bots, ransomware and other malware), and to conduct financial fraud through CEO and CFO impersonation. More recently, deepfakes have been used by nation-state adversaries to infiltrate organizations via impersonation or fake job interviews over Zoom.

How are law enforcement agencies addressing the challenges posed by deepfake technology?

Europol has really been a leader in identifying GenAI and deepfake as a major issue. Europol supports the global law enforcement community in the Europol Innovation Lab, which aims to develop innovative solutions for EU Member States’ operational work. Already in Europe, there are laws against deepfake usage for non-consensual pornography and cyber criminal gangs’ use of deepfakes in financial fraud.

What should organizations consider when adopting Generative AI technologies, as these technologies have such incredible power and potential?

Every organization is seeking to adopt GenAI to help improve customer satisfaction, deliver new and innovative services, reduce administrative overhead and costs, scale rapidly, do more with less and do it more efficiently. In consideration of adopting GenAI, organizations should first understand the risks, rewards, and tradeoffs associated with adopting this technology. Additionally, organizations must be concerned with privacy and data protection, as well as potential copyright challenges.

What role do frameworks and guidelines, such as those from NIST and OWASP, play in the responsible adoption of AI technologies?

On January 26th, 2023, NIST released its forty-two page Artificial Intelligence Risk Management Framework (AI RMF 1.0) and AI Risk Management Playbook (NIST 2023). For any organization, this is a good place to start.

The primary goal of the NIST AI Risk Management Framework is to help organizations create AI-focused risk management programs, leading to the responsible development and adoption of AI platforms and systems.

The NIST AI Risk Management Framework will help any organization align its organizational goals and use cases for AI. Most importantly, this risk management framework is human-centered. It includes social responsibility and sustainability information, and helps organizations closely focus on the potential or unintended consequences and impact of AI use.

Another immense help for organizations that wish to further understand the risks associated with GenAI Large Language Model adoption is the OWASP Top 10 LLM Risks list. OWASP released version 1.1 on October 16th, 2023. Through this list, organizations can better understand risks such as prompt injection and data poisoning. These risks are especially critical to know about when bringing an LLM in-house.

As organizations adopt GenAI, they need a solid framework through which to assess, monitor, and identify GenAI-centric attacks. MITRE has recently introduced ATLAS, a robust framework developed specifically for artificial intelligence and aligned to the MITRE ATT&CK framework.

For more of Check Point expert Micki Boland’s insights into deepfakes, please see CyberTalk.org’s past coverage. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

Evolving cyber security in the financial services sector – CyberTalk

EXECUTIVE SUMMARY:

The financial sector is a leading target for cyber criminals. Markedly improving the sector’s cyber security and resilience capabilities is a must. While the sector does have a comparatively high level of cyber security maturity, security gaps invariably persist and threaten to subvert systems.

As Check Point CISO Pete Nicoletti has noted, attackers only need to get it right once in order to catalyze strongly negative, systemic consequences that could send shockwaves throughout companies and lives across the globe.

In this article, discover financial sector trends, challenges and recommendations that can transform how you see and respond to the current cyber threat landscape.

Industry trends

  • According to a recent report, 65% of financial services sector organizations have endured cyber attacks.
  • The median ransom demand is $2 million. Mean recovery costs have soared to roughly $2.6 million – up from $2.2 million in 2023.
  • The size of extreme losses has quadrupled since 2017, to $2.5 billion.

The potential for losses is substantial, especially when multiplied in order to account for downstream effects.

Industry challenges

The majority of financial leaders lack confidence in their organization’s cyber security capabilities, according to the latest research.

Eighty percent of financial service firm leaders say that they’re unable to lead future planning efforts effectively due to concerns regarding their organization’s ability to thwart a cyber attack.

There is a significant gap between where financial sector institutions want to be with cyber security and where the industry is right now.

Preparing for disruption

Beyond cyber security, financial sector groups need to concern themselves with business continuity in the event of disruption — which is perhaps more likely than not.

“While cyber incidents will occur, the financial sector needs the capacity to deliver critical business services during these disruptions,” writes the International Monetary Fund.

A major disruption – the financial sector equivalent of the Colonial Pipeline attack – could disable infrastructure, erode confidence in the financial system, or lead to bank runs and market selloffs.

To put the idea into sharper relief, in December of 2023, the Central Bank of Lesotho experienced outages after a cyber attack. While the public did not suffer financial losses, the national payment system could not honor inter-bank transactions for some time.

Industry recommendations

Organizations need innovative approaches to cyber security — approaches that prevent the latest and most sophisticated threats. Approaches that fend off disaster from a distance.

In 2023, nearly 30 different malware families targeted 1,800 banking applications across 61 different nations.

At Check Point, our AI-powered, cloud-delivered cyber security architecture secures networks, endpoints, cloud environments and mobile devices via a unified approach.

We’ve helped thousands of organizations, like yours, mitigate risks and expand business resilience. Learn more here.

For additional financial services insights, please see CyberTalk.org’s past coverage. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

SEC charges against SolarWinds largely dismissed – CyberTalk

EXECUTIVE SUMMARY:

In a landmark case, a judge dismissed most of the charges against the SolarWinds software company and its CISO, Timothy Brown.

On July 18th, U.S. District Judge Paul Engelmayer stated that the majority of government charges against SolarWinds “impermissibly rely on hindsight and speculation.”

The singular SEC allegation that the judge considered credible concerns the failure of controls embedded in SolarWinds products.

For its part, SolarWinds has consistently maintained that the SEC’s allegations were fundamentally flawed, outside of its area of expertise, and a ‘trick’ designed to allow for a rewrite of the law.

Why it matters

For some time, the SEC has pursued new policies intended to hold businesses accountable for cyber security practices; an understandable and reasonable objective.

In this instance, the SEC said that claims made to investors in regards to cyber security practices had been misleading and false – across a three-year period.

The SEC’s indictment also mentioned falsified reports on internal controls, incomplete disclosure of the cyber attack, negligence around “red flags” and existing risks, and more.

But what caught the attention of many in the cyber security community was that, in an unprecedented maneuver, the SEC aimed to hold CISO Timothy Brown personally liable.

This case has been closely watched among cyber security professionals and was widely seen as precedent-setting for future potential software supply chain attack events.

Timothy Brown’s clearance

In the end, the court ruling does not hold CISO Timothy Brown personally liable for the breach.

“Holding CISOs personally liable, especially those CISOs that do not hold a position on the executive committee, is deeply flawed and would have set a precedent that would be counterproductive and weaken the security posture of organizations,” says Fred Kwong, Ph.D, vice president and CISO of DeVry University.

Despite the fact that this court ruling may loosen some CISO constraints, “you need to be honest about your security posture,” says Kwong.

The remaining claim against the company, which will be scrutinized further in court, indicates that there is a basis on which to conclude that CISOs do have certain disclosure obligations under the federal securities laws.

Further details

The SolarWinds incident, as it’s come to be known, has cost SolarWinds tens of millions of dollars. In 2023, the company settled a shareholder lawsuit to the tune of $26 million.

A spokesperson for SolarWinds has stated that the company is “pleased” with Judge Engelmayer’s decision to dismiss most of the SEC’s claims. The company plans to demonstrate why the remaining claim is “factually inaccurate” at the next opportunity.

For expert insights into and analyses of the SolarWinds case, please see CyberTalk.org’s past coverage. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

AI-powered protection, redefining resilience – CyberTalk

EXECUTIVE SUMMARY:

At Check Point, on AI Appreciation Day, we’re reflecting on the pivotal role of artificial intelligence in cyber security.

Although AI provides new capabilities for cyber criminals, as Check Point expert Keely Wilkins points out, “AI is just the mechanism used to commit the crime.” If AI didn’t exist, cyber criminals would find other means of augmenting their schemes.

Check Point and AI

At Check Point, we’ve integrated AI-powered solutions into our product suite, redefining proactive cyber security. Our algorithms can analyze billions of data points in real-time, identifying novel threats before they surface as substantive issues.

These types of predictive capabilities, and other AI-powered advantages, are not only technologically impressive, but also critical in a world where cyber attacks rank among the top five global risks and are becoming significantly more complex every day.

AI, cyber security and CXOs

For C-suite executives, embracing AI in cyber security is a strategic imperative. AI in cyber can increase protection for sensitive data, lead to cost efficiencies and strengthen operational resilience. In greater detail, here’s what we mean:

  • Enhanced risk management. AI-powered cyber security solutions can zero in on potential vulnerabilities, predict threat vectors and prioritize threats based on potential impact. In turn, this empowers professionals to make more informed decisions regarding resource allocation and risk management approaches.
  • Cost efficiency and ROI. While the initial investment in AI-driven cyber security may be a challenge, the long-term cost savings can justify the expense. AI can automate many routine security tasks. As a result, organizations can ‘close the talent gap’ while minimizing human error, and reducing breaches, which can come with huge financial penalties. CXOs can leverage the aforementioned cost efficiencies to prove the value of AI security investments and to demonstrate a clear ROI to the board.
  • Compliance and regulatory adherence. AI can help organizations effectively maintain regulatory compliance. AI-powered cyber security systems can monitor for compliance violations, automate reporting processes and adapt to new regulatory rules.
  • Operational resilience. As previously alluded to, AI-powered cyber security can respond to threats in real-time, allowing for threat containment before escalation occurs. AI-powered tools are also known for their ability to launch recovery processes on their own, providing unprecedented resilience capabilities.

AI and the human element

It’s easy to envision a business environment where AI accounts for all cyber security tasks, with limited work left for humans. However, at this point in time, as Check Point expert Keely Wilkins explains, “AI is [still just] a tool that the human at the helm uses to perform a task.” It’s not a panacea, and it won’t replace humans altogether.

For example, although AI can flag potential threats and anomalies, human experts are still required to interpret the findings within the broader context of an organization’s operations and risk profile.

The future of cyber security is one where AI enhances human capabilities. At Check Point, we’re committed to developing AI solutions that empower human experts. For insights into Check Point’s AI-powered, cloud-delivered security solutions, click here.

For additional AI insights from Cyber Talk, click here. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

AI data security: 8 essential steps for CISOs in the age of generative AI – CyberTalk

EXECUTIVE SUMMARY:

Artificial intelligence and large language models are transforming how organizations operate. They’re also generating vast quantities of data – including synthetic text, code, conversational data and even multimedia content. This introduces increased potential for organizations to encounter hacking, data breaches and data theft.

This article outlines eight essential steps that cyber security stakeholders can take to strengthen AI data security in an age where AI usage is rapidly accelerating and the societal consensus on AI regulation remains elusive.

AI data security: 8 essential steps

1. Risk assessment. The foundation of any effective security strategy is, of course, a thorough risk assessment. CISOs should conduct a comprehensive evaluation of their organization’s AI systems, identifying potential vulnerabilities, threats, and their potential impact.

This assessment should encompass the entire AI lifecycle, from data acquisition and model development to deployment and monitoring. By understanding the specific risks associated with AI initiatives, cyber security teams can prioritize and implement targeted security and mitigation strategies.

2. Robust governance framework. Effective AI data security requires a strong governance structure. CISOs need to develop a comprehensive framework that outlines data ownership, access controls, usage policies, and retention guidelines. This framework should align with relevant regulations, while incorporating principles of data minimization and privacy-by-design. Clear governance not only minimizes the risk of data breaches, but also ensures compliance with legal and ethical codes.

3. Secure development and deployment practices. As AI systems and security features are developed, cyber security teams need to ensure secure coding practices, vulnerability testing and threat modeling (where possible). In addition, security controls need to be put in place to protect AI models and infrastructure from unauthorized access or data loss. Prioritizing cyber security from the outset will enable organizations to reduce the probability that vulnerabilities will be introduced into production systems.

4. Protect training data. Cyber security professionals need to implement stringent security measures to protect the integrity and confidentiality of training data. This includes data anonymization, encryption and access controls, regular integrity checks to detect unauthorized modifications, and monitoring of data for adversarial inputs.
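
As one concrete (and hypothetical) way to implement the regular integrity checks mentioned above, a team might record a SHA-256 digest of every training data file and re-verify the digests before each training run. The sketch below uses only the Python standard library; the file paths are placeholders.

```python
# Sketch: hash-based integrity checks for training data, one way to detect
# unauthorized modifications. Standard library only; paths are placeholders.
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str) -> dict[str, str]:
    """Record a SHA-256 digest for every file under the training data directory."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).rglob("*")) if p.is_file()
    }

def verify_manifest(data_dir: str, manifest: dict[str, str]) -> list[str]:
    """Return paths whose current digest no longer matches the stored manifest."""
    current = build_manifest(data_dir)
    return [path for path, digest in manifest.items() if current.get(path) != digest]

# At data acquisition time: snapshot the manifest alongside the dataset.
Path("manifest.json").write_text(json.dumps(build_manifest("training_data/")))

# Before each training run: flag files that changed or disappeared.
tampered = verify_manifest("training_data/", json.loads(Path("manifest.json").read_text()))
if tampered:
    raise RuntimeError(f"Training data failed integrity check: {tampered}")
```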

5. Enhanced network security. AI systems often require significant computational resources across distributed environments. CISOs must ensure that the network infrastructure supporting AI operations is highly secure. Key measures include implementing network segmentation to isolate AI systems, utilizing next-generation firewalls and intrusion detection/prevention systems, and ensuring regular patching and updates of all systems in the AI infrastructure.

6. Advanced authentication and access controls. Given the sensitive nature of AI systems and data, robust authentication and access control mechanisms are essential. Cyber security teams should implement multi-factor authentication, role-based access controls, just-in-time provisioning for sensitive AI operations, and privileged access management for AI administrators and developers. These measures help ensure that only authorized personnel can access AI systems and data, reducing the risk of insider threats and unauthorized data exposure.

7. AI-specific incident response and recovery plans. While prevention is crucial, organizations must also prepare for potential AI-related security incidents. Cyber security professionals should develop and regularly test incident response and recovery plans tailored to AI systems. These plans should address forensic analysis of compromised AI models or data, communication protocols for stakeholders and regulatory bodies, and business continuity measures for AI-dependent operations.

8. Continuous monitoring and adaptation. AI data security is an ongoing commitment that requires constant vigilance. Implementing robust monitoring systems and processes is essential to ensure the continued security and integrity of AI operations. This includes real-time monitoring of AI system behavior and performance, anomaly detection to identify potential security threats or breaches, continuous evaluation of AI model performance and potential drift, and monitoring of emerging threats in the AI landscape.
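
As a minimal illustration of the anomaly detection piece of step 8, the sketch below applies a simple z-score test to a hypothetical stream of AI system request rates. Real deployments would rely on dedicated monitoring tooling and far richer signals; the threshold and sample data here are invented.

```python
# Sketch: flag outliers in a metric stream (e.g., requests per minute to an
# AI system) using a z-score test. Threshold and data are hypothetical.
from statistics import mean, stdev

def find_anomalies(samples: list[float], z_threshold: float = 2.5) -> list[int]:
    """Return indices of samples more than z_threshold std devs from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > z_threshold]

requests_per_minute = [102, 98, 110, 95, 105, 99, 101, 940, 97, 103]
print(find_anomalies(requests_per_minute))  # [7] -- flags the traffic spike
```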

Further thoughts

As AI and large language models continue to advance, the security challenges they present will only grow more complex. The journey towards effective AI data security requires a holistic approach that encompasses technology, processes, and people. Stay ahead of the curve by implementing the aforementioned means of ensuring robust AI data security.

Prepare for what’s next with the power of artificial intelligence and machine learning. Get detailed information about Check Point Infinity here.

Get more CyberTalk insights about AI here. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.