10 ways to reduce your risk of cyber fraud – CyberTalk


This year, in honor of International Fraud Awareness Week, we’re sharing actionable ways in which your organization can reduce the risk of cyber fraud.

Digitally enabled fraud can undermine your enterprise in seconds, causing minor meltdowns, miscommunications and market losses. The impact of fraud extends beyond operational deceleration and financial dips. Fraud erodes trust in a business, potentially leading to a dry pipeline, low sales volumes and even bankruptcy.

In our speed-driven business culture, the impulse may be to move fast to survive, without taking the time to enhance fraud prevention. But ‘the faster things move’ doesn’t necessarily translate to ‘the quicker they improve’.

A consistent, methodical approach to fraud prevention can lead to greater certainty around revenue, better customer experiences, and more engaged employees – all of which contribute to a stronger, healthier business.

Recognize fraud prevention as an opportunity through which to achieve better business outcomes. Here are 10 pro tips to help your business reduce cyber fraud risk:

1. Stop phishing. Phishing involves messages sent by fraudsters posing as legitimate organizations or individuals. It’s a leading catalyst of corporate swindles. To prevent phishing, secure inbound, outbound and internal emails.

Implement robust email security tools that can identify novel email schemes, eliminate threats before they reach users (without affecting workflow or productivity) and that provide granular insights into the types of phishing attacks hitting your organization.

2. Get tough on passwords. Fraud may be committed by someone – either internally or externally — who’s broken into your corporate accounts. Ensure that everyone within your organization uses tough-to-crack passwords that involve letters, numbers and symbols.

Be sure that passwords are changed frequently. Prohibit the use of shared usernames and passwords. When employees depart from the company, ensure that login information is updated efficiently.

Beyond that, change your wireless network default password, along with the default name used to identify your network. Avoid sharing the network name widely and consider encrypting the network.
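As a quick illustration, a minimal password policy check can be sketched in a few lines of Python; the 12-character minimum and the complexity rules here are illustrative assumptions, not a universal standard:

```python
import re

def meets_policy(password, min_length=12):
    """Return True if the password satisfies a basic complexity policy:
    minimum length plus at least one letter, one digit and one symbol."""
    if len(password) < min_length:
        return False
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"\d", password) is not None
    has_symbol = re.search(r"[^A-Za-z0-9]", password) is not None
    return has_letter and has_digit and has_symbol

print(meets_policy("Tr0ub4dor&3x!mple"))  # True
print(meets_policy("password123"))        # False: too short, no symbol
```

A check like this can be enforced at account creation or password reset; a real deployment would also screen against lists of known-breached passwords.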

3. Transaction monitoring. Fraudsters may attempt a variety of payment fraud schemes. To block this type of brazen business abuse, review and reconcile bank accounts on a daily basis. This allows your organization to spot discrepancies and to take action on suspicious transactions or missing payments.

In addition, when it comes to account-related requests made by company executives, consider requesting that all orders and changes are verified by phone or in-person, rather than relying on email confirmation alone.
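The daily reconciliation described above amounts to a set comparison between the internal ledger and the bank feed. A minimal sketch (the record format and transaction IDs below are illustrative):

```python
from collections import Counter

def reconcile(ledger, statement):
    """Compare the internal ledger against the bank statement and flag
    transactions that appear in only one of the two records."""
    ledger_counts = Counter(ledger)
    statement_counts = Counter(statement)
    missing_from_bank = list((ledger_counts - statement_counts).elements())
    unexpected_at_bank = list((statement_counts - ledger_counts).elements())
    return missing_from_bank, unexpected_at_bank

ledger = [("INV-1001", 250.00), ("INV-1002", 900.00)]
statement = [("INV-1001", 250.00), ("TXN-9944", 75.00)]
missing, unexpected = reconcile(ledger, statement)
# missing: a payment that never cleared; unexpected: an unknown debit
```

Either list being non-empty is a cue to investigate before more transactions pile up.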

4. Machine learning (ML). Some organizations encounter deceptive, duplicitous attempts on a frequent basis. This is where the volume of data might overwhelm teams. It’s also where machine learning can help teams scale fraud prevention.

A machine learning system can study historical patterns, sift through massive volumes of data, identify new patterns, and suggest risk management rules accordingly. Due to the nature of ML tools, these systems can also improve over time, providing increasingly helpful insights and analysis, while lessening the burden for your security team.
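A production fraud model is well beyond a blog post, but the core idea, learning a baseline from historical data and flagging deviations, can be sketched with a simple statistical stand-in for a trained model (the threshold and sample amounts are illustrative):

```python
from statistics import mean, stdev

def flag_anomalies(history, new_values, threshold=3.0):
    """Learn a baseline (mean and standard deviation) from historical
    transaction amounts, then flag new amounts that deviate by more
    than `threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return [v for v in new_values if abs(v - mu) > threshold * sigma]

history = [120, 95, 130, 110, 105, 125, 115, 100]   # typical daily amounts
suspicious = flag_anomalies(history, [118, 122, 5000])
print(suspicious)  # [5000]
```

Real ML systems replace the mean/standard-deviation baseline with models that learn many features at once, but the flag-what-deviates principle is the same.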

5. Routine fraud reviews. All enterprises should evaluate the utility of existing fraud-prevention software and procedures to ensure that they effectively safeguard against fraud (and haven’t been manipulated in any way). Some organizations find it helpful to have both in-house staff and trusted external partners carry out such reviews.

6. Security training for executives. Fraud schemes commonly target, or involve impersonation of, the C-suite. These types of attacks are notoriously difficult to detect and defend against, especially if top management lacks a high level of security awareness.

Ensure that top management receives dedicated cyber security training. Leaders need to know about BEC scams, deepfakes, spear phishing and whaling to not only avoid business compromise, but also to set an example for the rest of the organization.

7. Implement data encryption. Fraudsters may attempt to peer into a business’s email transactions to obtain information, enabling the fraudsters to execute well-disguised scams at a later point in time.

Protect sensitive information during transmission and storage. Encryption simply provides another layer of security. Should the data be intercepted, fraudsters won’t be able to parse it and weaponize the information for nefarious purposes.
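In practice, encryption in transit usually means enforcing TLS rather than inventing your own scheme. A minimal sketch using Python’s standard `ssl` module shows a hardened client context that validates certificates and refuses legacy protocol versions:

```python
import ssl

# A hardened client-side TLS context: certificate validation and
# hostname checking stay on, and legacy protocol versions are refused.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

assert context.check_hostname                    # on by default
assert context.verify_mode == ssl.CERT_REQUIRED  # certificates required
```

A context like this would then be passed to the HTTP or socket layer; the point is that the secure settings are the defaults, and the code only tightens them.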

8. Collaborate with experts. Partner with industry experts and consultants to make more informed decisions, to identify weaknesses within your security, and to develop strong business resilience strategies.

9. Establish a cyber security oversight committee. A dedicated cyber security oversight committee, composed of key executives and experts, can provide strategic direction, oversee cyber security initiatives, and ensure that the organization remains proactive in addressing evolving threats.

10. Appropriate technology. If your organization has work-from-home and/or hybrid employees, do you have the technology in place to both run fast and achieve robust cyber security control at the same time? Solve connectivity and security issues with SASE, SSE and automation.

Further thoughts

Although a few of these recommendations reflect basic best practices, even the most basic controls can avert a cyber attack — the key is to do the basics well and to then elevate your cyber security and fraud prevention measures with more sophisticated prevention techniques.

Stay ahead of fraudsters. For more fraud-fighting tips, please see CyberTalk.org’s past coverage. Lastly, to receive timely cyber security insights and cutting-edge analyses, please sign up for the cybertalk.org newsletter.

Kerberoasting attack technique explained and prevention tips


A wave of Kerberoasting attacks is stirring up cyber security concerns.

In the last 12 months, cyber security researchers have observed a 583% surge in this attack type — a worrying trend, especially since the attacks can be deployed in tandem with ransomware, leading to devastating consequences for targeted organizations.

Among cyber criminals, the appeal of Kerberoasting attacks lies in their potential to deliver comprehensive access to an organization’s entire IT infrastructure.

What is Kerberoasting?

Kerberoasting is a privilege escalation attack. At its core, Kerberoasting exploits the Kerberos authentication protocol used in Windows environments, abusing service principal names (SPNs) to gain access to service account credentials.

Developed at MIT in the 1980s, the Kerberos authentication protocol aimed to facilitate secure identity verification without transmitting plaintext passwords over a network. Over time, the protocol became the default authentication mechanism for Windows Active Directory domains.

Kerberoasting origins

This attack vector isn’t new; it has been around since 2014. The first known Kerberoasting attacks focused on government agencies and financial institutions. Eventually, this attack type declined in popularity among hackers.

However, recent observations indicate a resurgence, driven by weaknesses inherent in the complexity of modern computing infrastructure. Most recently, state-backed cyber criminals leveraged Kerberoasting in a series of supply chain attacks.

Kerberoasting has also been observed in connection with other attack types, like ransomware and data exfiltration.

The “Vice Spider” crime group

One cyber crime crew in particular has made extensive use of the technique. Known as “Vice Spider,” these hackers are thought to be responsible for nearly 30% of all observed Kerberoasting-related network intrusions.

How Kerberoasting attacks work

Typically, cyber criminals who deploy Kerberoasting attacks aim to gain control of a network’s service accounts by interacting with a domain controller’s ticket-granting server service. They use an authenticated account and then request service tickets associated with SPNs connected to vulnerable accounts.

The service tickets contain data encrypted with the service account’s password hash. The attackers then crack this encryption offline to reveal plaintext passwords, providing them with unfettered access to critical systems.

Why Kerberoasting attacks work

Among cyber criminals, Kerberoasting attacks are lauded for their stealth. These attacks operate without generating any noticeable alerts or conspicuous activities within the network.

Cyber criminals launching Kerberoasting attacks are also starting to incorporate automation within attack techniques. As a result, Kerberoasting attacks can be challenging to detect and tough to mitigate.

Kerberoasting attack prevention tips

To counter the growing risk posed by Kerberoasting attacks, a multi-layered cyber security strategy is a must.

  • Strengthening password policies for both service and user accounts is crucial, as weak passwords often facilitate the success of these attacks.
  • Cyber security professionals also need to recognize Kerberoasting attack indicators, such as unusual service ticket requests, failed login or unauthorized access attempts and unusual network traffic patterns.
  • Further, organizations can enhance their security by adopting encryption for network traffic, helping to thwart attackers who try to intercept and expose sensitive information.
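As a rough illustration of hunting for these indicators, the sketch below scans simplified Windows Security event 4769 records (Kerberos service ticket requests) for RC4-encrypted tickets and for accounts requesting unusually many distinct SPNs. The dict fields, account names and thresholds are illustrative assumptions, not a real log schema:

```python
RC4_HMAC = 0x17  # ticket encryption type that is crackable offline

def kerberoast_indicators(events, spn_threshold=10):
    """Flag accounts that request RC4-encrypted service tickets, or that
    request an unusually large number of distinct SPNs."""
    alerts = set()
    spns_by_account = {}
    for e in events:
        if e["encryption_type"] == RC4_HMAC:
            alerts.add(e["account"])
        spns_by_account.setdefault(e["account"], set()).add(e["spn"])
    for account, spns in spns_by_account.items():
        if len(spns) >= spn_threshold:
            alerts.add(account)
    return alerts

events = [{"account": "jdoe", "spn": "MSSQLSvc/db01", "encryption_type": 0x17}]
events += [{"account": "crawler", "spn": f"HTTP/host{i}", "encryption_type": 0x12}
           for i in range(12)]
alerts = kerberoast_indicators(events)  # {"jdoe", "crawler"}
```

A real detection would pull these fields from a SIEM and tune the volume threshold to the environment’s normal ticket-request patterns.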

For more insights into safeguarding your digital assets and maintaining cyber resilience, please see CyberTalk.org’s past coverage.

What is liquid data storage? – CyberTalk


If you thought that using liquid nitrogen to flash freeze ice cream sounded ambitious and exotic (and if you’ve never heard of that, it probably still sounds wild), this endeavor is even more so:

Because data is proliferating at such a rapid rate, our current storage technologies likely won’t remain adequate in perpetuity. By 2040, it’s estimated that humans will have produced as much as three septillion bits of data (that’s 3 followed by 24 zeros). By then, the earth might be depleted of the materials required to continue storing data through current methods.

Industry innovators are in tinkering mode when it comes to developing new data storage technologies. Among the leading candidates for a future breakthrough is what’s known as ‘liquid data storage,’ an approach that leverages nanoparticles suspended in liquid to expand data storage capacity.

Liquid data storage

As previously noted, as data volumes surge, traditional computer bits, limited to the binary states of 0 and 1, are facing constraints. But nanoparticles suspended in liquid can be used to store tremendous quantities of data – one terabyte per tablespoon.
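Taking the one-terabyte-per-tablespoon figure and the three-septillion-bit projection at face value, a back-of-the-envelope calculation gives a sense of scale:

```python
bits_by_2040 = 3e24                  # projected global data, in bits
terabytes = bits_by_2040 / 8 / 1e12  # 3.75e11 decimal terabytes
tablespoons = terabytes              # at the claimed 1 TB per tablespoon
liters = tablespoons * 14.79e-3      # one US tablespoon is about 14.79 mL
print(f"{terabytes:.2e} TB -> {liters:.2e} L of nanoparticle suspension")
```

That works out to roughly 5.5 billion liters of fluid, which is enormous, yet a very different footprint from the warehouses of hardware that the same data would require today.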

The unique capability of nanoparticles to configure themselves around a central sphere offers a dynamic system in which data can be stored by trapping particles into specific configurations.

It’s all in the details… 

In this system, the size of the central sphere dictates the storage and retrieval of data. When small, the sphere locks particles into a specific arrangement, encoding data. Expansion of the sphere allows for reconfiguration of particles, enabling the storage of different information.

The process offers a flexible and efficient approach to data storage, challenging the limitations of traditional methods.

DNA and holography storage

It’s not just liquid data storage that holds promise and potential…

Microsoft Azure Chief Technology Officer Mark Russinovich has previously revealed working prototypes for data storage systems based on DNA and holography.

In a DNA system, the data ‘lives’ in a liquid suspension that contains DNA. It is “read” using systems that combine molecular and electronic elements.

While the prototypes require continued engineering and reconfiguration before they can be commercialized at scale, this novel way of efficiently storing data could enter the technology landscape at some point in the future.

At present, storing an exabyte of data requires two Azure data centers (each roughly the size of a Walmart store), but DNA storage could theoretically contain that exabyte in a single cubic centimeter of space.

Beyond sci-fi

Although these advancements may seem like science fiction, significant investments from both industry and academia are fueling “moonshot” research endeavors.

The quest for more efficient, compact and scalable data storage solutions is pushing the boundaries of innovation.

For more insights into the latest technology trends, please see CyberTalk.org’s past coverage.

Exploring the future of IoT: Challenges and opportunities – CyberTalk

Miri Ofir is the Research and Development Director at Check Point Software.

Gili Yankovitch is a technology leader at Check Point Software, and a former founder and VP of Research and Development at Cimplify (acquired by Check Point).

With billions of connected devices that lack adequate security around them, the Internet of Things (IoT) market represents an extremely promising target in the eyes of cyber criminals. IoT manufacturers are grappling with emerging cyber security regulations and change is happening. However, concerns still abound.

In this dynamic interview, Check Point experts Miri Ofir and Gili Yankovitch discuss what you need to know as we move into 2024. Get insights into IoT exploit techniques, prevention approaches and best practices. Address IoT security issues effectively – starting now.

What does the global threat landscape look like and could you share perspectives around 2024 predictions?

The global threat landscape has been affected by the increasing number of geopolitically motivated cyber attacks. We’re referring to state-sponsored attacks.

Cyber espionage by state-sponsored actors aims to steal intellectual property, gather intelligence, or even lay the groundwork for potential sabotage. Countries like Russia, China, North Korea, and Iran have advanced state-sponsored cyber attack skills, and we can track complicated campaigns affiliated with those countries.

An example of this type of campaign is a supply chain attack. As the name implies, this involves targeting less-secure elements in an organization’s supply chain. The SolarWinds hack from 2020 is a notable example, in which attackers compromised a vendor’s software update mechanism to infiltrate numerous government and private sector systems across the U.S.

The Internet of Things (IoT) market is highly targeted and prone to supply chain attacks. The rapid proliferation of these devices, often in absence of robust security measures, means a vast expansion of potential vulnerabilities. Malicious actors can exploit IoT weak points to gain unauthorized access, steal data, or launch attacks.

What are IoT device manufacturers’ biggest challenges at the moment?

IoT manufacturers are facing evolving regulation with regard to cyber security obligations. Supply chain concerns and increasing attacks (a 41% increase in IoT attacks during Q1 2023 compared to Q1 2022) have led governments to change policies and to better regulate device security. We see two types of programs being rolled out:

1. Mandatory regulations to help manage software bills of materials (SBOMs) and to verify that products go to market with some basic cyber security coverage. SBOMs will help manufacturers better understand the components inside their products and maintain them through patches and other mitigations. This will add overhead for manufacturers.

2. Excellent initiatives like the U.S. Cyber Trust Mark labeling program, which aims to provide clarity about the privacy and security of a product and to allow educated users to select safer products, among other considerations, like energy efficiency.

While this is an obligation and a burden, it is also a business opportunity for manufacturers. The market is changing in many respects. For example, the U.S. sanctions on China are not only financially motivated; the Americans see China as a national security concern, and the new sanctions push major competitors out of the market.

In this vacuum, there is room for new players. Manufacturers can leverage the changing landscape to gain higher market share by highlighting the cyber security of their products as a key differentiator.

What are the most used exploit techniques on IoT devices?

There are several main attack vectors for IoT devices:

1. Weak credentials: Although manufacturers take credentials much more seriously these days than previously (because of knowledge, experience or regulation), weak or leaked credentials still plague the IoT world. This is due to the many older devices already deployed in the field, or to passwords that are still easily cracked. One such example is the famous Mirai botnet, which continues to plague the internet in search of devices with known credentials.

2. Command injection: Because IoT devices are usually implemented in a lower-level language (due to performance constraints), developers sometimes take “shortcuts” when implementing the device’s software. These shortcuts are usually commands that interact with system resources, such as files, services and utilities that run in parallel to the main application on the IoT device. An unaware developer can take these shortcuts to deliver functionality much faster, while leaving a large security hole that allows attackers to gain complete control. The same actions can be implemented in a safer way, but doing so takes longer to build and change. These command weaknesses serve as entry points for attackers to exploit vulnerabilities on the device.
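The shortcut in question is often shell command execution with user-controlled input. The contrast can be sketched in Python (IoT firmware is more often C, where the analogous fix is preferring `execv()`-style calls over `system()`); the `echo` payload below is a harmless stand-in for a real injection:

```python
import subprocess

malicious = "host; echo INJECTED"  # harmless stand-in for attacker input

# VULNERABLE: shell=True hands the whole string to a shell, so the
# semicolon starts a second command of the attacker's choosing.
out_unsafe = subprocess.run(f"echo {malicious}", shell=True,
                            capture_output=True, text=True).stdout

# SAFER: list-form arguments are never parsed by a shell, so the
# metacharacters arrive at the program as one literal argument.
out_safe = subprocess.run(["echo", malicious],
                          capture_output=True, text=True).stdout

print(repr(out_unsafe))  # 'host\nINJECTED\n': two commands ran
print(repr(out_safe))    # 'host; echo INJECTED\n': one literal argument
```

The safer form avoids the shell entirely, which is exactly the “longer to implement” path the interview describes: argument lists instead of string concatenation.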

3. Vulnerabilities in 3rd party components: Devices aren’t built from scratch by a single vendor. They usually consist of a number of 3rd party libraries, usually open source, that are an integral part of the device’s software. These software components are actively maintained and researched, so new vulnerabilities in them are discovered all the time. However, the rate at which vulnerabilities are discovered is much higher than the rate of an IoT device’s software update cycle. This causes devices to remain unpatched for a very long time, even years, resulting in vulnerable devices with vulnerable components.

Why do IoT devices require prevention and not only detection security controls?

Unlike endpoints and servers, IoT devices are physical devices that can be spread across a large geographical landscape. These are usually fire-and-forget solutions that are monitored live at best, or sampled only periodically at worst. When attention to these devices is that low, each device needs to be able to protect itself on its own, rather than wait for human intervention. Moreover, attacks on these devices are fairly technical, in contrast to threats like the ransomware we see on endpoints. Detection security controls will usually allow the operator to do little more than reboot the device. Prevention, by contrast, removes the threat from the system entirely. This way, mitigation is not only immediate, but also proportionate to each threat and attack the device faces.

Why is it important to check the firmware? What are the most common mistakes when it comes to firmware analysis?

The most common security mistakes we find in firmware are usually things that “technically work, so don’t touch them,” and so they’ve been left alone for a while. For example, outdated libraries, packages and servers all start “growing” CVEs over time. They technically still function, so no one bothers to update them, but they are often exposed over the network to a potential attacker, and when the day comes, an outdated server can and will be the point of entry allowing for takeover of the machine.

A second common thing we see is private keys, exposed in firmware, that are available for download online. These private keys are supposed to hold some cryptographically strong value; for example, proof that the communicating entity belongs to a certain company. However, they are available to anyone who anonymously downloads the firmware for free, which means they no longer hold that value.
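The second mistake, private keys shipped inside firmware images, is easy to screen for once the filesystem is extracted. A minimal sketch that walks an extracted firmware tree looking for PEM key markers (the marker list is illustrative, not exhaustive):

```python
import os

# PEM markers that indicate an embedded private key (illustrative list).
KEY_MARKERS = (
    b"-----BEGIN RSA PRIVATE KEY-----",
    b"-----BEGIN EC PRIVATE KEY-----",
    b"-----BEGIN PRIVATE KEY-----",
    b"-----BEGIN OPENSSH PRIVATE KEY-----",
)

def find_embedded_keys(root):
    """Walk an extracted firmware tree and report files that appear to
    contain PEM-encoded private keys."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    data = fh.read()
            except OSError:
                continue  # unreadable device nodes, broken symlinks, etc.
            if any(marker in data for marker in KEY_MARKERS):
                hits.append(path)
    return hits
```

Any hit deserves review: a key that ships in a downloadable image should be treated as public, and rotated.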

What are some best practices for automatic firmware analysis?

Best practices for automated assessment: in my opinion, the analysis process breaks down into three clear steps: extraction, analysis and reporting.

A) Extraction: This is a huge, unsolved problem; the elephant in the room. Extracting firmware is not a flawless process. It is important to verify the results, extract any missed items, create custom plugins for unsupported file types, remove duplicates, and detect failed extractions.

B) Analysis: Proper software design is key. A security expert is often required to assess the risk, impact and likelihood of exploitation for a discovered vulnerability. The security posture depends on the setup and workings of the IoT device itself.

C) Report: After the analysis completes, you end up with a lot of actionable data. It’s critical to improve the security posture of the device based on the action items in the report.

For more insights like this, please sign up for the cybertalk.org newsletter.

5 emerging malware threats, record-breaking malware activity – CyberTalk


Across the past decade, cyber security researchers have observed an alarming 87% surge in malware infections. An estimated 560,000 new pieces of malware are detected daily, and more than 1 billion malware programs are thought to be circulating across the web.

The situation becomes even more disconcerting if we narrow our focus to the current year. In 2023, malware threats have increased by 110% on a quarter-over-quarter basis, reaching 125.7 million inboxes in Q3; a significant increase from 60 million in Q2.

These unsettling trends warrant attention. In essence, current malware levels have surpassed previous thresholds, underscoring the importance of staying informed and vigilant in order to safeguard people, processes and technologies.

Here’s a comprehensive overview of five emerging malware threats, each one more stealthy and insidious than the last.

5 emerging malware threats

1. GootBot. The GootLoader group has developed a new malware variant for command-and-control (C2) and lateral movement —dubbed “GootBot”— that’s been observed in campaigns that leverage SEO-poisoned searches for business documents.

Researchers note that GootBot sends victims to compromised sites that look like legitimate forums. Once there, users are deceived into downloading the initial payload as an archive file.

After infection, large quantities of GootBot implants are disseminated throughout corporate environments. Each implant leverages a different hardcoded C2 server, making the attack difficult to block.

Active since 2014, the GootLoader group often relies on a combination of SEO poisoning and compromised WordPress sites to deliver malware.

2. BunnyLoader. This newly observed Malware-as-a-Service tool is under active development. Capabilities are evolving, but generally include keylogging, clipboard monitoring, and remote command execution (RCE).

Any threat actor can purchase a basic version of BunnyLoader for $250 USD on the dark web, while a more sophisticated version of the tool is available at a higher price point.

At the core of BunnyLoader’s operations is the C2 panel, which oversees an array of nefarious tasks: keylogging, credential harvesting and more. The C2 panel also offers statistics, client tracking and task management. In turn, the threat actor can closely control and monitor infected machines.

Technical analyses have revealed that BunnyLoader is equipped with persistence mechanisms and anti-sandboxing tactics. The malware uses various techniques to evade analysis and detection.

5. LionTail Malware. In its most recent campaign, a group known as Scarred Manticore has been observed using LionTail, a set of custom loaders and in-memory shellcode payloads.

These do not have any overlap with known malware families, enabling attackers to blend in with legitimate traffic and to remain undetected.

As part of the framework, Check Point discovered that Scarred Manticore deploys the passive backdoor LionTail on Windows servers in order to execute commands via HTTP requests and to run payloads that attackers send to the URLs specified in the malware’s configuration.

The LionTail framework has been used in attacks targeting government, military, telecommunication and financial organizations. These groups have been located in Iraq, Israel, Jordan, Kuwait, Oman, Saudi Arabia and the United Arab Emirates. A regional affiliate of a global non-profit network was also compromised.

This malware is believed to have been developed by nation-state actors and the group that deploys it is primarily focused on data extraction, covert access and other espionage-related activities.

4. SecuriDropper. This Dropper-as-a-Service (DaaS) operation infects mobile Android devices by posing as a legitimate app. In most instances, the app mimics a Google App, an Android update, a video player, a game or even a security app.

Once downloaded, the dropper installs a payload, which is some form of malware. The dropper does this by securing access to the “Read & Write External Storage” and the “Install & Delete Packages” permissions.

A second-stage payload is installed through user deception, as the user is prompted to tap a “Reinstall” button after seeing a fake error message about the app’s installation.

Researchers have observed SpyNote malware distributed through SecuriDropper. In one instance, the entire operation was disguised within an imitation Google Translate app.

In other instances, SecuriDropper was observed distributing banking trojans disguised as the Chrome browser, targeting hundreds of cryptocurrency and e-banking applications.

5. Jupyter infostealer. A wave of new incidents involving a Jupyter infostealer have affected organizations in the education, healthcare and government sectors.

The malware enables hackers to steal credentials and to exfiltrate data. Although this malware has technically existed since 2020, new variants continue to evolve with simple, yet impactful (and unsettling) changes.

In the most recent incidents, the researchers found the infostealer posing as legitimately signed files, using a valid certificate to avoid scrutiny and to enable initial access to a victim’s machine.

Jupyter infections occur via malicious websites, drive-by downloads, and phishing emails. Recently, an online copy of the U.S. government’s budget for 2024 was found to be infected.

Further information

Contending with the amorphous landscape that is malicious software requires a proactive and innovative approach to cyber security.

Remain resilient in the face of relentless malware threats. Ensure that your organization leverages cyber security solutions that provide comprehensive coverage across all threat vectors.

Solutions should encompass a wide spectrum of preventative security layers; from firewalls, to intrusion prevention systems, to advanced endpoint protection.


QR code phishing traps (and prevention tips) – CyberTalk

Jeremy Fuchs is the Content Marketing Specialist for Harmony Email & Collaboration. Previously, he worked at Avanan, which was acquired by Check Point in 2021. In another life, he was a sportswriter, spending four years at Sports Illustrated.

Recently, we’ve seen a lot of news about quishing, or QR code phishing, in which the link behind a QR code is malicious, even though the QR code itself is not. There was a report of a major U.S. energy firm targeted by QR code phishing. Other reports have noted an uptick in these types of attacks.

In fact, Harmony Email researchers have found that nearly all of our customers have been targeted with a QR code-based attack. That coincides with a 587% increase in QR code attacks from August to September.

Why are these trending upward? They seem innocuous enough, just those friendly QR codes that we use to scan menus.

But they are a great way to hide malicious intent. The image can hide a malicious link and if the original image isn’t scanned and parsed, it’ll appear as just a regular image.

And because end-users are accustomed to scanning QR codes, getting one in an email isn’t necessarily a cause for concern.

Below is an example of a typical QR code phishing attack. In these types of attacks, hackers create a QR code that leads to a credential harvesting page. The “lure” is that the recipient’s Microsoft MFA is expiring and they need to re-authenticate.

Though the body says it comes from Microsoft security, the sender’s address comes from a domain that has nothing to do with Microsoft. 

Once the user scans the QR code, they are redirected to a page that looks like Microsoft’s, but is in fact just a credential harvesting page.


Hackers have long used images of text to hide content from scanners. Historically, a typical attack worked like this: the message text was embedded in an image, which would bypass some language analysis tools.

To combat that, you needed Optical Character Recognition (OCR), which converts images to text so that they can be understood.

Hackers then found another way around that: the QR code.

Combating these attacks is a little trickier. You need OCR that can detect QR codes, translate them into the URL that hides behind the code, and run that URL through analysis tools.

For us, we’ve been protecting against QR code exploits for a number of years, and we deployed these latest protections within just a few days. It’s an example of how we, at Check Point, think philosophically: it’s all about having different tools in order to respond to changes in the attack landscape on a dime. We don’t always know what direction hackers will go in next, but we do have the foundational tools to combat them; from being inline, to wrapping URLs, to emulation, to opening encrypted content, and more.

When an attack vector gains steam, like QR codes, we can look at our deep repository of tools and capabilities to build a solution in no time at all.

For QR codes, we use the QR code analyzer in our OCR engine. It identifies the code, retrieves the URL, and then tests it against our other engines. In fact, the mere existence of a QR code in an email message body is an indicator of a possible attack. Once OCR converts the image to text, our NLP engine is able to identify suspicious language and flag it as phishing.
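The pipeline described above can be sketched as follows. `decode_qr` is a placeholder for the OCR/QR-decoding engine (not implemented here), and the domain heuristic is deliberately simplified; the domains and URLs in the example are illustrative:

```python
from urllib.parse import urlparse

def looks_like_phish(url, claimed_brand_domain):
    """Heuristic: the message claims to be from a brand, but the URL
    behind the QR code resolves to an unrelated domain."""
    host = (urlparse(url).hostname or "").lower()
    brand = claimed_brand_domain.lower()
    return not (host == brand or host.endswith("." + brand))

def analyze_email_image(image_bytes, claimed_brand_domain, decode_qr):
    """decode_qr stands in for an OCR/QR engine: it takes image bytes
    and returns the embedded URL, or None if no QR code is found."""
    url = decode_qr(image_bytes)
    if url is None:
        return "no-qr"
    return "phishing" if looks_like_phish(url, claimed_brand_domain) else "clean"

# Usage with a stubbed decoder standing in for the real engine:
verdict = analyze_email_image(b"fake-image-bytes", "microsoftonline.com",
                              lambda img: "https://examp1e-mfa.ru/auth")
print(verdict)  # phishing
```

A production system would replace the stub with a real decoder and run the extracted URL through full reputation and emulation engines rather than a single domain check.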

QR code phishing is the latest trend taking the cyber security world by storm. And it’s only increasing, requiring diligence from end-users and new solutions from vendors.

Want to learn more about QR codes and phishing? Join us for our webinar on November 8th!


7 actionable security automation best practices – CyberTalk


Nearly 75% of CEOs are concerned about their organizations’ abilities to avert or mitigate a cyber incident. It’s widely known that organizations need to become more resilient and to prioritize continuous delivery capabilities.

In our constantly evolving threat landscape, one key way to build resilience is through cyber security automation. Survey data indicates that more than 40% of organizations see automation as a “major factor” contributing to the successful improvement of their cyber security posture.

Security automation can streamline time-consuming, manual cyber security tasks and offer efficient threat prevention, investigation and incident response mechanisms. Automation also empowers security staff to dedicate time to strategic, higher-level cyber security tasks that otherwise might be sidelined.

Security automation: it’s not that simple…

However, to gain the aforementioned automation advantages, organizations need to adhere to relevant, industry-led best practices. Doing so not only ensures that organizations can harness the full potential of automated cyber security solutions, but also enables staff members to work in a synchronous and symbiotic way with solutions.

The seven actionable security automation best practices below will help your organization integrate the strengths of automation with those of human intelligence, maximizing the opportunities to thrive within complex, high-pressure and precision-centric enterprise ecosystems.

7 actionable security automation best practices

To achieve stronger cyber security outcomes through automation, unpack these savvy practitioner best practices:

1. Optimize the synergy. Automation excels in executing routine tasks. However, humans are still needed to bring in unique insights, contextual understanding and strategic thinking.

It’s the synergy between automation and skilled staff that’s key in an effective, modernized cyber security strategy. Reallocate resources to evolve and rethink human roles and to ensure alignment across ecosystem elements.

2. Commit to team training. Prepare for the shift from manual to automated response by providing team members with comprehensive training that’s tailored to individual roles. Reinforce both the technical aspects of new automated solutions and their practical implications.

Clarify precisely what a security automation solution can handle, and where human intervention is critical. Clear explanations around this can prevent misunderstandings and can ensure that your team knows when to step in.

3. Prioritize automation initiatives. Assess and decide on which security issues are most pressing; map out the priorities. When you have a well-defined set of priorities, develop use-cases and evaluate opportunities for security workflow automation.

In the process, engage relevant stakeholders. Although bringing in a wider working group may slow down efforts, an inclusive approach ensures that key perspectives are heard, resulting in broader consensus and buy-in. This can prevent future roadblocks and resistance to automation adoption.

4. Take a measured approach. When it comes to security automation, most organizations can’t automate everything at once. But this may work to a given organization’s advantage.

Moving forward with automation in high-impact areas provides opportunities to build internal support for it and to showcase the effectiveness of automation tools. The initial results can reaffirm stakeholder buy-in and foster the momentum necessary to further expand automated initiatives.

A measured approach can serve as a foundation for a successful and adaptable automation strategy – one that aligns with your organization’s specific needs and objectives.

5. Create playbooks. Before beginning the workflow automation process, ensure that workflows are as robust as possible. This will help when optimizing processes.

Then, develop playbooks. These will serve as the foundation upon which your automation efforts will be built.

Playbooks will help set the stage for successful automation process development that’s predicated upon a solid foundation of well-documented and optimized workflows.
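As a sketch of what a documented workflow can look like once it is ready for automation, here is a hypothetical phishing-response playbook expressed as data. The step names and ownership fields are invented for illustration and not tied to any particular SOAR product:

```python
# A minimal, hypothetical phishing-response playbook expressed as data.
# Each step names an action and an owner (automation vs. human analyst).
PHISHING_PLAYBOOK = {
    "name": "suspected-phishing-email",
    "steps": [
        {"action": "quarantine_message", "owner": "automation"},
        {"action": "extract_indicators", "owner": "automation"},
        {"action": "check_other_mailboxes", "owner": "automation"},
        {"action": "assess_business_impact", "owner": "analyst"},
        {"action": "notify_affected_users", "owner": "analyst"},
    ],
}

def automated_steps(playbook: dict) -> list[str]:
    """Return the actions that can run without human intervention."""
    return [s["action"] for s in playbook["steps"] if s["owner"] == "automation"]

print(automated_steps(PHISHING_PLAYBOOK))
# ['quarantine_message', 'extract_indicators', 'check_other_mailboxes']
```

Capturing a playbook as structured data like this makes the split between automated and human-owned steps explicit, which is exactly the clarity that best practice #2 above calls for.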

6. Plan higher-level projects. Automation presents opportunities for your security team to contribute to your organization at a higher level. Strategically consider how analysts may be able to redirect efforts into previously overlooked or under-attended value-add areas.

For instance, analysts may be able to spend time uncovering the root causes of persistent types of threats, such as phishing. Proactive investigations of root causes can assist with addressing underlying vulnerabilities. These kinds of activities can significantly contribute to the elevation of an organization’s cyber security posture.

7. Consider security orchestration. Integrating security orchestration alongside security automation enables organizations to seamlessly coordinate complex security workflows across multi-cloud environments. This can improve operational efficiency, communication and collaboration, and can also yield reduced response times.

For more insights into cyber security automation, please see CyberTalk.org’s past coverage or click here. Lastly, to receive timely cyber security insights and cutting-edge analyses, please sign up for the cybertalk.org newsletter.

The latest industry expert AI predictions for 2024


In this highly informative and engaging interview, Check Point expert Sergey Shykevich spills the tea on the trends that he and his threat intelligence team are currently seeing. You’ll get insights into what’s happening with AI and malware, find out how nation-state hackers could manipulate generative AI algorithms, and get a broader sense of what to keep an eye on as we move into 2024.

Plus, Sergey also tackles the intellectual brain-teaser that is whether or not AI can express creativity (and the implications for humans). Let’s dive right in:

To help our audience get to know you, would you like to share a bit about your background in threat intelligence?

I’ve been in threat intelligence for 15 years. I spent 10 years in military intelligence (in various positions, mostly related to cyberspace intelligence) and I’ve been in the private sector for around 6 years.

These last two years have been at Check Point, where I serve as the Threat Intelligence Group Manager for Check Point Research.

Would you like to share a bit about the cyber trends that you’ve seen across this year, especially as they relate to AI?

Yes. We have seen several trends. I would say that there are 3-4 main trends.

  • One trend we see, which is still in flux, relates to the development of the ransomware ecosystem. The ecosystem and its threat actors are increasingly operating like nation-state actors, as they’re becoming very sophisticated.

    To illustrate my point, they now use multi-operating system malware. What does that mean? It means that they not only focus on Windows, but are increasingly targeting Linux as well.

    This matters because, for many organizations, critical servers are Linux servers. In many cases, the impact of disrupting these servers is much bigger than, say, disrupting the activity of 100 Windows laptops.

    So, that’s a huge part of what’s happening in terms of ransomware. In addition, we’ve also seen mega ransomware events this year, like the MOVEit hack and its use in a large-scale supply chain attack.

  • Another trend that we’re seeing is the resurgence of USB infections. Many consider USB drives an old technology, and a lot of people no longer use them. USB-borne infections go back to 2012, or even 2010 – think of Stuxnet in Iran or the well-known Conficker malware. But what we’re seeing now is an uptick in USB infections, propagated by nation-state actors, like China and Russia, and by everyday cyber criminals.

    Why do we think that we’re seeing a resurgence of USB-based threats? We think that the barriers for hackers in other areas – such as network security and email security – have become much higher. So hackers are trying different methods, like USB infections.

  • We’re also seeing a resurgence of DDoS attacks, mostly from hacktivist groups that are trying to disrupt the functionality of websites.
  • And of course, our team sees all of the threats related to AI. The AI-related threats that we observe are mostly related to phishing, impersonation and deepfakes.

    We do see AI used in malware development, but in terms of AI and malware, we aren’t seeing extremely sophisticated threats or threats that are “better” or more sophisticated than what a good code developer could create.

    In contrast, in relation to phishing and deepfakes, AI allows for a level of sophistication that’s unprecedented. For example, AI allows cyber criminals who don’t know a particular spoken language to craft perfect phishing emails in that language, making the emails sound like they were written by native-speakers.

    I would say that AI will be able to take malware to a new level in the near future, but we’re not there yet.

How can AI be leveraged to counter some of the threats that we’re seeing and that we’ll see into the future?

On the phishing and impersonation side, I think AI is being used, and will mostly be used, to identify specific patterns or anomalies within email content, which is no easy job for these tools. Most of the phishing content that’s created by AI is pretty good, especially since the data is now pulled directly from the internet (e.g., by the latest version of ChatGPT). AI-based solutions can much better identify suspicious attachments and links, and can prevent attacks in their initial stages.

But of course, the best way to counter AI-based phishing threats, as they exist right now, is still to avoid clicking on links and attachments.

Most cyber criminals aim to get people to take further action – to fill out a form, or to engage in some other activity that helps them. I think that a big thing that AI can do is to identify where a specific phishing email leads to, or what is attached to the email.

Of course, there’s also the possibility of using AI and ML to examine the emails a person receives and assess whether or not they look like phishing emails (based on the typical emails that the person receives day to day). That’s another possible use-case for AI, but I think that AI is more often used for what I mentioned before: phishing attack assessment.
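A toy version of the pattern matching Sergey describes can be sketched in a few lines. This is a naive heuristic for illustration only; real AI-based solutions use trained models rather than keyword lists, and the phrase list and domain check below are hypothetical.

```python
import re

# Hypothetical urgency phrases -- a trained model would learn these, not list them.
URGENCY_PHRASES = ["act now", "verify your account", "password expired", "urgent"]

def looks_like_phishing(sender_domain: str, body: str) -> bool:
    """Crude check: urgency language combined with links pointing off the sender's domain."""
    lowered = body.lower()
    urgent = any(p in lowered for p in URGENCY_PHRASES)

    # Collect the domains of all http(s) links in the body.
    link_domains = re.findall(r"https?://([\w.-]+)", body)
    off_domain = any(not d.endswith(sender_domain) for d in link_domains)

    return urgent and off_domain

print(looks_like_phishing(
    "example.com",
    "URGENT: verify your account at http://examp1e-login.net/reset",
))  # True: urgency language plus an off-domain link
```

The point of the sketch is the combination of signals: either indicator alone produces many false positives, which is why production systems weigh dozens of features rather than two.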

Could our cyber crime-fighting AI be turned against us?

In theory, yes. I think that this is more of an issue for the big, well-known AI models like ChatGPT — there are a lot of theoretical concerns about how these companies protect their models (or fail to).

There are really two main concerns here. 1) Will unauthorized people have access to our search queries and what we submit? 2) Manipulation — a topic about which there is even more concern than the first. Someone could manipulate a model to provide very biased coverage of a political issue, making the answer or answers one-sided. There are very significant concerns in this regard.

And I think everyone who develops AI or generative AI models that will be widely used needs to protect them from hacking and the like.

We haven’t seen such examples and I don’t have proof that this is happening, but I would assume that big nation-state actors, like Russia and China, are exploring methods for manipulating AI algorithms.

If I were on their side, I would investigate how to do this because with hacking and changing models, you could influence hundreds of millions of people.

We should definitely think more about how we protect generative AI, from data integrity to user privacy and the rest.

Do you think that AI brings us closer to understanding human intelligence? Can AI be creative?

It’s an interesting set of questions. ChatGPT and Bing now have a variety of different models that can be used. Some of these are defined as ‘strict’ models while others are defined as ‘creative’ models.

I am not sure that it really helps us understand human intelligence. I think that it may put before us more questions than answers. Because I think that, as I mentioned previously, 99.999% of people who are using AI engines don’t really understand how they work.

In short, AI raises more questions and concerns than it provides answers about human intelligence and human beings.

For more AI insights from Sergey Shykevich, click here. Lastly, to receive timely cyber security insights and cutting-edge analyses, please sign up for the cybertalk.org newsletter.

Industrial network security management solutions 2021

By Shira Landau, Editor-in-Chief, CyberTalk.org


Industrial control systems deliver water, electricity, fuel and other essential services that power millions of enterprises around the world. These systems are susceptible to cyber threats, especially as Industry 5.0 increases cyber-physical connectivity. In the recent past, numerous disturbing cases of cyber intrusion have occurred. Industrial network security is mission-critical.

Industrial network security is similar to standard enterprise information system security. However, it does present its own unique challenges. Industrial network security represents a critical business performance indicator. Industrial network security configurations provide insight into business risk exposure, level of corporate competitiveness, and indicate future business continuity, or potential lack thereof.

Systems and networks in industrial control systems (ICSs) have distinctive features, and are often built on trusted computing platforms with commercial operating systems. Industrial control systems are designed with ruggedness in mind. Most perform reliably for long stretches of time. The typical integrated industrial control system may have a life expectancy that extends for several decades.

The original system designers likely didn’t envision continual cyber-physical security upgrades. But cyber threats are evolving every day. How can industrial network security keep pace?

Industrial network security: An imperative

Improved industrial network security is an imperative. Industrial systems often rely on legacy devices and may run on legacy protocols. These systems were initially developed for long-term use far ahead of the proliferation of internet connectivity, web-based software and real-time enterprise information management portals.

In the early days of industrial networks, information security did not receive much attention. Physical security took priority. Systems were air-gapped, which appeared adequate in terms of cyber security. In the 1990s, as organizations re-engineered business operations and reevaluated operational needs, businesses began to deploy firewalls and other means of blocking attackers. As the years passed, an increasing number of security tactics were tossed into the mix. Nonetheless, industrial network security (INS) needed to play catch-up, and many INS leaders are still doing so today.

Industrial network security: The challenge

International bodies, such as the United Nations, are working to address industrial control system threats. At the same time, industrial organizations must take independent action around cyber security.

One challenge that plagues these systems is that threat defense measures can conflict with core network requirements. To visualize this, consider how CEOs and rank-and-file employees alike often try to skirt cyber security protocols when they slow down productivity. A similar security vs. function tradeoff can occur within industrial system development.

Sophisticated and advanced cyber threats represent a prominent problem for industrial groups. In addition, accidental cyber incidents are a growing concern. For example, an operational system engineer may introduce a network threat during regular technical maintenance.

It’s not just connected networks that are at-risk. Industrial networks that remain disconnected from the internet can still experience cyber intrusions. This can lead to data loss and other untoward business consequences. For instance, a third-party vendor may update systems, but in so doing, connect an unauthorized device that either intentionally or accidentally captures proprietary information.

Industrial network security: The solutions

  • Infrastructure attacks represent imminent threats to industrial groups. Many recent attacks on operational technology (OT) and ICS networks appear based on IT attack vectors, like spear phishing campaigns via email and ransomware on endpoints. Using threat prevention solutions can prevent and eliminate these kinds of attacks before they breach the ICS equipment.
  • An OT engineer may intend to patch systems expeditiously, only to find that the patch is not quick to install, thereby postponing the action, leaving the system unpatched. Operational technology cyber security vendors may be able to offer intrusion prevention systems (IPS) that reduce vulnerabilities through “virtual patching.” This type of solution can protect Windows-based workstations, servers and SCADA equipment.
  • Antivirus and anti-bot technologies can also protect industrial equipment. The software can identify threats before they lead to extreme harm. Malware and bots alike can result in network failures, grinding business operations to a halt.
  • To properly define a security policy, industrial groups must have solutions in place that provide visibility into and understanding of the environment. Visibility means seeing all of the assets within the environment and recognizing what they are and what function they perform. An understanding of granular configurations is also critical.
  • Developing a behavioral baseline for characterization of legitimate traffic can further enhance security. To optimize a security baseline, experts recommend a focus on traffic logging and behavior analysis. Ultimately, organizations should strive for a baseline that can help hunt for threats within the network, detect anomalies and provide other valuable services.
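The baselining idea in the last bullet can be illustrated with a simple statistical sketch: learn the normal traffic volume for a device, then flag readings far outside it. A real OT monitoring product models many more dimensions (protocols, command sequences, timing); the z-score threshold and sample figures here are arbitrary assumptions for illustration.

```python
from statistics import mean, stdev

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Learn a (mean, stdev) baseline from historical traffic readings."""
    return mean(samples), stdev(samples)

def is_anomalous(reading: float, baseline: tuple[float, float], z: float = 3.0) -> bool:
    """Flag a reading more than z standard deviations from the baseline mean."""
    mu, sigma = baseline
    return abs(reading - mu) > z * sigma

# Hypothetical hourly packet counts from a controller during normal operation.
history = [100.0, 104.0, 98.0, 101.0, 99.0, 103.0, 97.0, 102.0]
baseline = build_baseline(history)

print(is_anomalous(100.0, baseline))  # False: within the normal range
print(is_anomalous(480.0, baseline))  # True: possible exfiltration or scanning
```

Even this crude baseline shows why traffic logging matters: without a record of what “normal” looks like, there is nothing to compare an anomaly against.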

In conclusion

As Industry 5.0 evolves, strengthening industrial network security will enable businesses and individuals to operate in safer and more stable environments. The consequences of industrial control system network failures are extreme, and should be avoided at all costs. Avoid being the catalyst of a domino effect by shoring up your organization’s network security.

For more information about industrial network security, click here. Lastly, for more cyber security and business insights, analysis and resources, sign up for the Cyber Talk newsletter.

Generative AI, innovation, creativity & what the future might hold – CyberTalk

Stephen M. Walker II is CEO and Co-founder of Klu, an LLM App Platform. Prior to founding Klu, Stephen held product leadership roles at Productboard, Amazon, and Capital One.

Are you excited about empowering organizations to leverage AI for innovative endeavors? So is Stephen M. Walker II, CEO and Co-Founder of the company Klu, whose cutting-edge LLM platform empowers users to customize generative AI systems in accordance with unique organizational needs, resulting in transformative opportunities and potential.

In this interview, Stephen not only discusses his innovative vertical SaaS platform, but also addresses artificial intelligence, generative AI, innovation, creativity and culture more broadly. Want to see where generative AI is headed? Get perspectives that can inform your viewpoint, and help you pave the way for a successful 2024. Stay current. Keep reading.

Please share a bit about the Klu story:

We started Klu after seeing how capable the early versions of OpenAI’s GPT-3 were when it came to common busy-work tasks related to HR and project management. We began building a vertical SaaS product, but needed tools to launch new AI-powered features, experiment with them, track changes, and optimize the functionality as new models became available. Today, Klu is actually our internal tools turned into an app platform for anyone building their own generative features.

What kinds of challenges can Klu help solve for users?

Building an AI-powered feature that connects to an API is pretty easy, but maintaining that over time and understanding what’s working for your users takes months of extra functionality to build out. We make it possible for our users to build their own version of ChatGPT, built on their internal documents or data, in minutes.

What is your vision for the company?

The founding insight that we have is that there’s a lot of busy work that happens in companies and software today. I believe that over the next few years, you will see each company form AI teams, responsible for the internal and external features that automate this busy work away.

I’ll give you a good example for managers: Today, if you’re a senior manager or director, you likely have two layers of employees. During performance management cycles, you have to read feedback for each employee and piece together their strengths and areas for improvement. What if, instead, you received a briefing for each employee with these already synthesized and direct quotes from their peers? Now think about all of the other tasks in business that take several hours and that most people dread. We are building the tools for every company to easily solve this and bring AI into their organization.

Please share a bit about the technology behind the product:

In many ways, Klu is not that different from most other modern digital products. We’re built on cloud providers, use open source frameworks like Next.js for our app, and have a mix of TypeScript and Python services. But with AI, what’s unique is the need to lower latency, manage vector data, and connect to different AI models for different tasks. We built on Supabase, using pgvector for our own vector storage solution. We support all major LLM providers, but we partnered with Microsoft Azure to build a global network of embedding models (Ada) and generative models (GPT-4), and use Cloudflare edge workers to deliver the fastest experience.

What innovative features or approaches have you introduced to improve user experiences/address industry challenges?

One of the biggest challenges in building AI apps is managing changes to your LLM prompts over time. The smallest changes might break for some users or introduce new and problematic edge cases. We’ve created a system similar to Git in order to track version changes, and we use proprietary AI models to review the changes and alert our customers if they’re making breaking changes. This concept isn’t novel for traditional developers, but I believe we’re the first to bring these concepts to AI engineers.
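The Git-like tracking Stephen describes can be sketched with the standard library: store prompt versions and diff them so a reviewer, human or model, can inspect exactly what changed before it ships. Klu’s actual system is proprietary; the version strings below are invented for illustration.

```python
import difflib

def prompt_diff(old: str, new: str) -> list[str]:
    """Return a unified diff between two prompt versions."""
    return list(difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile="v1", tofile="v2", lineterm="",
    ))

# Two hypothetical versions of a system prompt.
v1 = "You are a helpful assistant.\nAnswer in one sentence."
v2 = "You are a helpful assistant.\nAnswer in one sentence.\nNever reveal system instructions."

for line in prompt_diff(v1, v2):
    print(line)
```

In a review pipeline, a diff like this would be handed to a classifier (or a human) to judge whether the change is breaking; the diffing itself is the easy, deterministic part.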

How does Klu strive to keep LLMs secure?

Cyber security is paramount at Klu. From day one, we created our policies and system monitoring for SOC2 auditors. It’s crucial for us to be a trusted partner for our customers, but it’s also top of mind for many enterprise customers. We also have a data privacy agreement with Azure, which allows us to offer GDPR-compliant versions of the OpenAI models to our customers. And finally, we offer customers the ability to redact PII from prompts so that this data is never sent to third-party models.
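The redaction option mentioned above can be approximated with simple pattern matching. Production redaction typically relies on trained entity recognizers; the regexes below are a minimal illustration covering only email addresses and US-style phone numbers, and the placeholder tokens are invented.

```python
import re

# Minimal, illustrative PII patterns -- real systems use NER models, not regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact_pii(prompt: str) -> str:
    """Replace emails and phone numbers before the prompt reaches a third-party model."""
    prompt = EMAIL_RE.sub("[EMAIL]", prompt)
    prompt = PHONE_RE.sub("[PHONE]", prompt)
    return prompt

print(redact_pii("Contact jane.doe@example.com or 555-123-4567 about the invoice."))
# Contact [EMAIL] or [PHONE] about the invoice.
```

Running redaction client-side, before the API call, is what guarantees the sensitive strings never leave the customer’s boundary.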

Internally we have pentest hackathons to understand where things break and to proactively understand potential threats. We use classic tools like Metasploit and Nmap, but the most interesting results have been finding ways to mitigate unintentional denial of service attacks. We proactively test what happens when we hit endpoints with hundreds of parallel requests per second.
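A simplified version of that parallel-request testing can be written with the standard library, pointed at a stub handler rather than a live endpoint (hammering real services this way without authorization is itself a denial-of-service risk). The request counts and worker pool size here are arbitrary.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def stub_endpoint(payload: int) -> int:
    """Stand-in for an API handler; a real test would target a staging endpoint."""
    time.sleep(0.001)  # simulate a small amount of work
    return payload * 2

def hammer(n_requests: int, workers: int = 50) -> list[int]:
    """Fire n_requests in parallel and collect the responses in order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(stub_endpoint, range(n_requests)))

results = hammer(200)
print(len(results), results[:3])  # 200 [0, 2, 4]
```

The interesting observations in a real test come from watching latency and error rates as the worker count climbs, which is how unintentional denial-of-service conditions surface.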

What are your perspectives on the future of LLMs (predictions for 2024)?

This (2024) will be the year for multi-modal frontier models. A frontier model is just a foundational model that is leading the state of the art for what is possible. OpenAI will roll out GPT-4 Vision API access later this year and we anticipate this exploding in usage next year, along with competitive offerings from other leading AI labs. If you want to preview what will be possible, ChatGPT Pro and Enterprise customers have access to this feature in the app today.

Early this year, I heard leaders worried about hallucinations, privacy, and cost. At Klu and across the LLM industry, we found solutions for these, and we continue to see a trend of LLMs becoming cheaper and more capable each year. I always talk to our customers about not letting these concerns stop your innovation today.

Start small, and find the value you can bring to your customers. Find out if you have hallucination issues, and if you do, work on prompt engineering, retrieval, and fine-tuning with your data to reduce them. You can test these new innovations with engaged customers who are ok with beta features, but will greatly benefit from what you are offering them. Once you have found market fit, you have many options for improving privacy and reducing costs at scale – but I would not worry about that in the beginning; it’s premature optimization.

LLMs introduce a new capability into the product portfolio, but it’s also an additional system to manage, monitor, and secure. Unlike other software in your portfolio, LLMs are not deterministic, and this is a mindset shift for everyone. The most important thing for CSOs is to have a strategy for enabling their organization’s innovation. Just like any other software system, we are starting to see the equivalent of buffer exploits, and expect that these systems will need to be monitored and secured if connected to data that is more important than help documentation.

Your thoughts on LLMs, AI and creativity?

Personally, I’ve had so much fun with GenAI, including image, video, and audio models. I think the best way to think about this is that the models are better than the average person. For me, I’m below average at drawing or creating animations, but I’m above average when it comes to writing. This means I can have creative ideas for an image, the model will bring them to life in seconds, and I am very impressed. But for writing, I’m often frustrated with the boring ideas, although it helps me find blind spots in my overall narrative.

The reason for this is that LLMs are just bundles of math finding the most probable answer to the prompt. Human creativity, from the arts to business to science, typically comes from novel combinations of ideas, something that is very difficult for LLMs to do today. I believe the best way to think about this is that employees who adopt AI will be more productive and creative; the LLM removes their potential weaknesses and works like a sparring partner when brainstorming.

You and Sam Altman agree on the idea of rethinking the global economy. Say more?

Generative AI greatly changes worker productivity, including the full automation of many tasks that you would typically hire more people to handle as a business scales. The easiest way to think about this is to look at what tasks or jobs a company currently outsources to agencies or vendors, especially ones in developing nations where skill requirements and costs are lower. Over this coming decade you will see work that used to be outsourced to global labor markets move to AI and move under the supervision of employees at an organization’s HQ.

As the models improve, workers will become more productive, meaning that businesses will need fewer employees performing the same tasks. Solo entrepreneurs and small businesses have the most to gain from these technologies, as they will enable them to stay smaller and leaner for longer, while still growing revenue. For large, white-collar organizations, the idea of measuring management impact by the number of employees under a manager’s span of control will quickly become outdated.

While I remain optimistic about these changes and the new opportunities that generative AI will unlock, it does represent a large change to the global economy. Klu met with UK officials last week to discuss AI Safety and I believe the countries investing in education, immigration, and infrastructure policy today will be best suited to contend with these coming changes. This won’t happen overnight, but if we face these changes head on, we can help transition the economy smoothly.

Is there anything else that you would like to share with the CyberTalk.org audience?

Expect to see more security news regarding LLMs. These systems are like any other software, and I anticipate both poorly built software and bad actors who want to exploit these systems. The two exploits that I track closely are very similar to buffer overflows. One enables an attacker to potentially bypass and hijack the prompt sent to an LLM; the other bypasses the model’s alignment tuning, which prevents it from answering questions like, “how can I build a bomb?” We’ve also seen projects like GPT4All leak API keys that give people free access to paid LLM APIs. These leaks typically come from the keys being stored in the front end or local cache, which is a security risk completely unrelated to AI or LLMs.