You can’t afford to be reactive with security anymore. Instead of waiting until you notice an attack, you need to assume you’re vulnerable and have already been attacked. “Assume breach” is a security principle that says you should act as if all your resources—applications, networks, identities and services, both internal and external—are insecure and have already been compromised, and you just don’t know it yet.
One way of finding out is to use “deception technologies”: decoy resources placed in strategic parts of your network, with extra monitoring, that you can lure attackers into going after—keeping them out of your real systems and making them reveal themselves as they sniff around.
Setting a trap to expose cyberattackers
“Adversaries often start ‘in the dark’ after a successful compromise, unsure about exactly what systems they may have access to, what they do and how these are connected to different parts of an organization. It’s during this recon phase that an adversary is most likely to reach out or probe other services and systems,” Ross Bevington, principal security researcher in the Microsoft Threat Intelligence Center, told TechRepublic.
That’s where deception technology like honeypots (infrastructure that looks like a real server or database but isn’t running a live workload), honeytokens (decoy objects in real workloads you’re already running) and others come in. “By representing itself as systems or services an attacker is interested in, but are not actually used in any business processes, high fidelity detection logic can be constructed that alerts the security team to post compromise activity,” Bevington said.
Deception technology works best when it is difficult to remotely tell the difference between a real system and a fake one, he explained: That way, the attacker wastes time on the decoy.
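To make the decoy idea concrete, here’s a minimal sketch of a low-interaction honeypot in Python: it listens on a port no legitimate client should touch, presents a plausible banner so it isn’t obviously fake, and logs every connection attempt. The port, banner and log file are illustrative choices, not anything prescribed by a particular product.

```python
# Minimal sketch of a low-interaction honeypot: a decoy listener that accepts
# connections on a port nothing legitimate uses, records who connected, and
# returns a plausible banner. Port, banner and log path are illustrative.
import socket
import datetime
import json

DECOY_PORT = 2222                      # a port attackers commonly probe; no real service lives here
BANNER = b"SSH-2.0-OpenSSH_8.4\r\n"    # plausible banner so the decoy isn't obviously fake
LOG_PATH = "decoy_connections.jsonl"   # every hit is high-signal

def run_decoy() -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", DECOY_PORT))
    srv.listen(5)
    while True:
        conn, (src_ip, src_port) = srv.accept()
        event = {
            "time": datetime.datetime.utcnow().isoformat() + "Z",
            "src_ip": src_ip,
            "src_port": src_port,
        }
        with open(LOG_PATH, "a") as log:   # in practice this would go to your SIEM
            log.write(json.dumps(event) + "\n")
        try:
            conn.sendall(BANNER)           # keep the interaction plausible for a moment
        finally:
            conn.close()

if __name__ == "__main__":
    run_decoy()
```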
Plus, you now know the attacker is there. Because there’s no legitimate reason to access those resources, anyone who tries is clearly unfamiliar with your system. It might be a new hire who needs training (also useful to know), but it might be an attacker.
You can use deception as intrusion detection, like a tripwire, or you can deliberately expose it (which Microsoft itself does) “…as a way of collecting threat intelligence on what adversaries may be doing pre-compromise,” he said.
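As a rough illustration of those two modes, this sketch reads the connection log written by the decoy listener above and separates internal hits (the tripwire case: something already inside your network touched a resource it had no reason to) from external hits (pre-compromise intelligence from a deliberately exposed decoy). The RFC 1918 private ranges stand in for “your network”; adjust them to your own address space.

```python
import ipaddress
import json

# Private address blocks stand in for "inside the network"; adjust to your environment.
INTERNAL_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def classify_hits(log_path: str = "decoy_connections.jsonl") -> None:
    with open(log_path) as log:
        for line in log:
            event = json.loads(line)
            src = ipaddress.ip_address(event["src_ip"])
            if any(src in net for net in INTERNAL_NETS):
                # Tripwire: an internal host probed a decoy it had no business touching.
                print(f"TRIPWIRE: internal host {src} touched the decoy -- investigate")
            else:
                # Deliberately exposed decoy: external probes are pre-compromise threat intel.
                print(f"INTEL: external source {src} probed the exposed decoy")

if __name__ == "__main__":
    classify_hits()
```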
“Either way the goal of deception technology is to significantly increase the costs for the attacker whilst reducing that of the defender,” said Bevington.
Some deception techniques take more work. “Many customers take steps to customise their lures, decoys and traps to their ways of working,” Bevington told us.
But running extra infrastructure does take time and incur costs. You also have to make it look like a legitimate workload without copying over any sensitive information; otherwise, the attacker will know it’s a fake. And the security team running a honeypot doesn’t always know what real-life workloads look like the way admins and operations teams do—but so far, software engineering teams haven’t had many tools to set these kinds of traps (even though the “shift left” philosophy of devops means they’re more involved in security).
Enter honeytokens: fake tokens you plant in your existing workloads, with legitimate-looking names that match your real resources. They’re cheap and easy to deploy, can cover as many workloads as you’re running, and they’re low maintenance: Once set up, they can generally be left for months or years without additional effort, Bevington says. “Tokens are now being used more frequently as a low cost, high signal way of catching a full range of adversaries.”
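At its simplest, a honeytoken is a decoy credential stored next to your real ones, named to match your conventions, that no legitimate code path ever uses. The sketch below shows that pattern; the key value, names and alert hook are invented for illustration and aren’t tied to any particular product.

```python
# Sketch of the honeytoken idea: a decoy credential planted alongside real ones.
# If its value ever shows up in a request, someone has been reading secrets they
# shouldn't. All names and values here are made up for illustration.
import hmac
import logging

logger = logging.getLogger("honeytoken")

# Planted in the same secret store / .env as real credentials, e.g.:
#   PROD-PAYMENTS-API-KEY = <real key>
#   PROD-BILLING-API-KEY  = <the decoy below>
DECOY_API_KEY = "pk_live_51HDecoyDoNotUse000000000000000"

def check_for_honeytoken(presented_key: str, src_ip: str) -> bool:
    """Return True and raise an alert if a request used the decoy credential."""
    # compare_digest avoids hinting at the decoy value through timing differences
    if hmac.compare_digest(presented_key, DECOY_API_KEY):
        logger.critical("Honeytoken used from %s -- likely post-compromise activity", src_ip)
        # in practice: page the security team or push an event to your SIEM here
        return True
    return False
```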
The downside is that you don’t get a deep understanding of who an adversary is or what they are trying to do when they trip a honeytoken; a honeypot gives a security team more information about the attacker.
Which you need depends on your threat model, Bevington points out. “Honeypots have the potential to give defenders significant amounts of threat intelligence on who the attacker is and what they want to achieve, but with higher costs because honeypots require CPU and memory and are either installed on a machine or virtual machine and require ongoing attention to maintain.” Many organizations don’t need that extra information and may feel like tokens are enough.
Honeytokens made easy
Microsoft has been using deception techniques for quite some time, because so many attackers try to get into Microsoft services and customer accounts (this is part of what Microsoft calls its “sensor network”). “We’ve seen great value in embedding technology like tokens and honeypots into our internal security posture,” Bevington said. That deception data has helped Microsoft analysts find new threats against Windows, Linux and IoT devices. Exposing an open Docker API server revealed attackers who used the Weave Scope monitoring framework to compromise containers, and other deception technologies showed how botnets like Mozi and Trickbot attack IoT devices.
Once it uncovers the ways attackers compromise infrastructure, Microsoft can add protections in its Defender services for those specific attacks. It’s also been making deception data available to researchers looking for ways to automate processing that data for detection.
But with the new Microsoft Sentinel Deception (Honey Tokens) solution for planting decoy keys and secrets in Azure Key Vault, you don’t have to be a security expert to run deception technologies. “One of the goals of Sentinel and our recently released Azure Key Vault token preview is to reduce the complexity of deploying these solutions so that any organization with an interest in this technology can deploy it easily and securely,” Bevington said.
It includes analytics rules to monitor honeytoken activity (including an attacker trying to turn off that monitoring) and workbooks for deploying honeytokens and investigating honeytoken incidents (as well as recommendations in Azure Security Center). Honeytokens get names based on your existing keys and secrets, and you can use the same keyword prefixes you use for your real tokens.
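The Sentinel solution deploys and monitors these tokens for you through its workbooks and analytics rules, but the underlying idea is easy to picture: a decoy secret whose name follows the same prefix convention as your real secrets. Here’s a rough standalone sketch using the Azure SDK for Python; the vault URL, prefix and secret name are invented for illustration, and this is not how the Sentinel workbook itself deploys tokens.

```python
# Sketch of planting a decoy secret in Azure Key Vault: the name matches the
# prefix used for real secrets so it looks worth stealing, the value is random
# and unlocks nothing. Vault URL, prefix and name below are illustrative.
import secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://contoso-prod-kv.vault.azure.net"  # illustrative vault
REAL_PREFIX = "prod-sql-"                              # match the naming of your real secrets

def plant_decoy_secret() -> None:
    client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
    decoy_name = f"{REAL_PREFIX}connection-string-backup"
    # The value is random and unlocks nothing; any read of it is worth an alert.
    client.set_secret(decoy_name, secrets.token_urlsafe(32))

if __name__ == "__main__":
    plant_decoy_secret()
```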
It might seem counterintuitive to effectively invite attackers into a service as important as Azure Key Vault, but you’re really just finding out if you have correctly secured the service with options like managed identity. With honeytokens that pretend to be secrets and access credentials, “the keys are such a significant reward to an adversary that they may spend significant resources trying to access this data,” Bevington pointed out. It’s important to put in place basic security hygiene processes and practices like MFA and passwordless authentication—and to make sure you monitor any alerts for your honeytokens or other deception technologies closely.
Think of this as another layer in your defenses. Alongside deceiving real attackers into going after fake resources, you can also see what a real attack would be like: for example, simulating denial of service attacks on resources you protect with Azure, using services like Red Button or BreakingPoint Cloud. Try exploring your own systems with Red Team tools like Stormspotter that show you what resources in your Azure subscriptions are visible, so you know what an attacker would see as they start looking around.
Using what you learn about how attackers behave from deception techniques to protect your real resources can help you stay a step ahead.