Thoughts on the NotPetya Ransomware Attack

By Nicholas Weaver
Wednesday, June 28, 2017, 4:16 PM

Worms, malicious computer programs that spread from computer to computer across a network, are perhaps the most devastating delivery mechanism for an electronic attack. Able to spread throughout an entire institution (or even across the entire planet) in a matter of minutes, they represent the most effective way for a bad actor to deliver a malicious payload to as many computers as possible. A worm can do its damage faster than humans can react.

The most recent ransomware attack, which spread across Europe, the United States, and Asia yesterday, represents a chilling evolution in the worm-as-weapon. This is the second ransomware attack in two months, following the WannaCry attack that spread across the globe in May. Beginning in Ukraine, this new worm shut down much of the Ukrainian government, the Danish shipping conglomerate Maersk, and others. And there is a significant possibility that this wasn’t an attempt to collect ransom but a sophisticated attack launched against Ukrainian interests.

The worm has sometimes been referred to as Petya, based on the ransomware module used in the attack, but is really best described as NotPetya, since the Petya ransomware is simply a small component of a larger attack. This is a very sophisticated worm, which doesn’t just use one mechanism for spreading, but combines three separate techniques. The first, using NSA tools released by ShadowBrokers, is similar to how WannaCry operated. But the other two mechanisms are more pernicious, leveraging Windows network administration and privileges to spread. So if NotPetya found itself running in a privileged account on a typical workstation, it could rapidly spread to other machines in the infected network. One observer reported that it could compromise 5000 machines in ten minutes.
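A back-of-envelope model shows how quickly that kind of spread saturates a network. The sketch below is purely illustrative: the per-minute compromise rate is a hypothetical parameter, not a measured property of NotPetya.

```python
# Toy model of worm spread inside a flat corporate network.
# Assumption: each infected machine compromises rate_per_min new peers
# every minute until the pool of vulnerable hosts is exhausted.

def infections_over_time(total_hosts, rate_per_min, minutes):
    infected = 1.0
    timeline = []
    for _ in range(minutes):
        infected = min(total_hosts, infected * (1 + rate_per_min))
        timeline.append(int(infected))
    return timeline

# With a hypothetical 1.5 new compromises per infected host per minute,
# a single seed infection saturates 5,000 hosts within ten minutes.
print(infections_over_time(total_hosts=5000, rate_per_min=1.5, minutes=10))
```

Even modest per-host compromise rates produce exponential growth, which is why a worm that abuses network administration privileges can outrun any human response.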

Here’s how it works. Although self-spreading, NotPetya does not generally spread outside of a single corporate network: it pretty much only bridges networks when computers move between them. So the attackers responsible used another clever way in. Rather than hacking the target networks, the attackers compromised the update service of MeDoc, a Ukrainian program used for accounting and tax purposes, with tight integration into the Ukrainian business tax workflow. (Imagine the business equivalent of Quicken/TurboTax in the U.S.) Most businesses that have to pay Ukrainian taxes will be running a copy somewhere in their corporate network. By replacing the normal update with an installer for the NotPetya worm, every time a copy of the MeDoc program checked for updates (a process that happens automatically) the computer would become infected with NotPetya. After establishing this beachhead, NotPetya could then spread throughout a corporate network.
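One standard defense against this kind of supply-chain hijack is to verify each downloaded update against a digest or signature published by the vendor over a separate trusted channel before installing it. The sketch below is a minimal illustration of that idea; all names in it are hypothetical, and it does not describe MeDoc's actual update code.

```python
# Illustrative sketch: verify an update's integrity before installing it.
# Assumption: the vendor publishes the expected SHA-256 digest out of band
# (e.g., signed metadata), so a hijacked download server alone cannot
# substitute a worm installer for the real update.

import hashlib
import hmac

def update_is_authentic(update_bytes: bytes, expected_digest: str) -> bool:
    """Compare the downloaded update's SHA-256 digest to the pinned one."""
    actual = hashlib.sha256(update_bytes).hexdigest()
    # hmac.compare_digest performs a constant-time comparison.
    return hmac.compare_digest(actual, expected_digest)

genuine = b"legitimate update payload"
tampered = b"worm installer"
pinned = hashlib.sha256(genuine).hexdigest()

print(update_is_authentic(genuine, pinned))   # digest matches
print(update_is_authentic(tampered, pinned))  # digest differs
```

Digest pinning only helps if the digest itself travels over a channel the attacker hasn't compromised, which is why real update systems layer code-signing keys on top of this check.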

The leveraging of privileges and administration tools is devastating. Even an institution which is 100 percent patched and running the latest Windows 10 operating system could become completely infected if NotPetya started running on the wrong computer. We don’t know which fraction of infections were due to a failure to patch and which fraction were due to the worm abusing privileges, but I would suspect the latter was actually more important for its rapid and effective spread.

This solid design suggests not just good development but good testing. Building a worm is not hard per se: it’s simply a matter of coupling a remote exploit to a program that both searches for vulnerable systems and, after exploitation, copies itself onto the new victim. But it takes significant effort to make sure that a worm works reliably, especially on systems as diverse as Windows. Unlike normal software development, the programmer can’t simply run the program, find a bug, fix it, and repeat. Worms need to be carefully tested in isolated networks, since even a poorly engineered worm can escape the attacker’s control if it gets loose on the Internet. Given the difficulty in testing, doing a worm right also requires programmers who don’t write many bugs.

The problem is further magnified when testing a worm with a malicious payload. NotPetya not only spreads using multiple mechanisms, but spreads reliably and apparently without major bugs. It also contains an overtly malicious payload that renders unusable the host computer, in ten minutes to an hour, yet doesn’t generally impede the worm’s spread. Killing the host is always a risk for a pathogen as a dead host can’t spread a plague further. So a malicious payload like this needs to be tuned: fast enough to limit opportunities for human response, but slow enough that it doesn’t inhibit the spread.

This speaks to a rigorous development process. If I had been in charge of building NotPetya, I’d budget for two to three good programmers, a month’s time, and a small but diverse isolated network for testing. I’d probably also need one additional person to keep rebuilding the isolated network after each test. In other words, it wouldn’t be something I’d do in a basement.

Yet for all the sophistication, the ransomware payload is politely described as a fecal theater piece. Ransomware needs a mechanism to contact the operator, but rather than using Tor hidden services, the authors used just a single email address that was quickly shut down, eliminating any way to contact the bad actors or obtain decryption keys. If a high-profile ransomware operator can’t release computers in return for payment, victims will stop paying. Ransomware works because the bad guys (try to) keep their word.

Ransomware also needs a payment mechanism that is hard to trace, such as a per-victim Bitcoin wallet, but NotPetya uses a single Bitcoin address. So not only can we see that they’ve gained only roughly ten thousand dollars, the public nature of the Bitcoin ledger ensures that these miscreants will have great difficulty withdrawing even their paltry earnings without being traced.

Ransomware also needs to resist a cryptanalytic attack attempting to unlock the encrypted files. Yet NotPetya uses 800-bit RSA, a key length small enough that it is easily in the reach of NSA cryptanalysis and possibly within the reach of an enthusiastic private group as well.
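For a sense of scale: academic researchers publicly factored RSA-768, a 232-digit modulus, back in 2009, and an 800-bit modulus is only marginally larger. The quick calculation below converts key sizes to decimal digits for comparison.

```python
# Compare RSA modulus sizes in decimal digits. RSA-768 (232 digits) was
# publicly factored by academic researchers in 2009; 800 bits is barely
# bigger, while 2048 bits is the modern baseline.

import math

def rsa_decimal_digits(bits: int) -> int:
    # An n-bit number has floor(n * log10(2)) + 1 decimal digits.
    return math.floor(bits * math.log10(2)) + 1

print(rsa_decimal_digits(768))   # 232: publicly factored in 2009
print(rsa_decimal_digits(800))   # 241: NotPetya's key length
print(rsa_decimal_digits(2048))  # 617: current common practice
```

Factoring cost grows sub-exponentially in key size, so the jump from 768 to 800 bits buys far less security margin than the digit counts alone might suggest.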

Finally, ransomware needs a usable interface. Good ransomware will unlock the victim’s computer at the click of a button after the attacker is satisfied. In order to offer this functionality, the ransomware needs to keep the computer running while it encrypts the victim’s data.

But this particular ransomware can’t do that. When it infects a computer, it reboots the machine and then encrypts the files, rendering the computer completely unusable until the victim either reinstalls the operating system or pays the ransom. This would make it nearly impossible to pay the ransom in practice unless you have access to another computer: how do you contact the ransomware operator when your computer is completely unusable? And payment is not easy, as it requires the user to copy out a complex string of characters, without error, and email it to the operator. So even if the operator’s email account worked perfectly, odds are good the ransomware operator couldn’t unlock the victim’s computer.

Failed ransomware would be acceptable if this was new technology and new ransomware. But the attackers integrated a range of tools, including the ShadowBrokers NSA toolkit, the mimikatz tool for extracting Windows authentication tokens, and the Petya ransomware, among other pieces. They could have just as easily selected a much better ransomware payload, one that would actually ensure they could collect their money.

The only way the Petya payload is superior to other ransomware is that it does disable the computer. If you wanted to deploy profitable ransomware to thousands of computers this is a horrid choice. On the other hand, if you want to deploy a payload that renders thousands of computers unusable but looks like ransomware, this is perhaps the best candidate possible. There is now additional evidence that suggests it was deliberately modified to render computers unusable rather than collect ransom.

This leaves us with two likely possibilities. Either NotPetya was written by a group of criminals who showed great sophistication in their development process, wrote an excellent worm, and screwed up horribly on the one part that matters for the criminals to gain anything. Or the worm was written by an actor who showed great sophistication in the development process, wrote an excellent worm, and used it to launch a malicious payload targeted at both the Ukrainian government and all businesses who pay Ukrainian taxes.

In many ways, this may even be a sequel to the CrashOverride power grid attack. CrashOverride looked like the test of a payload. NotPetya could easily be both an attack on the Ukrainian government and those doing business in the country and a live test of a delivery system. If so, the worm’s incidental infection of non-Ukrainian targets, notably Rosneft and other Russian targets, should act as a reminder that fast self-propagating code risks a huge amount of collateral damage.

At the moment, we don’t yet know whether the worm was developed as a criminal enterprise by hackers who failed to develop a working payment system or as an intentional attack on Ukraine disguised as criminal activity. When all this is done, I think the NSA would be well served by making public its own internal assessment of who is behind NotPetya. They don’t need to disclose sources and methods, simply the conclusion: “With X confidence, we believe NotPetya was authored by Y.” Either way, we would benefit from some attribution beyond this guesswork and logic.

Even if NotPetya does turn out to have been criminal activity, we should consider the lessons it can teach us when it comes to developing defenses. With a few changes, you could easily use NotPetya as the delivery mechanism for a worm designed to black out the U.S. And given NotPetya’s abuse of administrative privileges, this attack could very well work against operators who are religious about patching their systems.

Experts have long worried about the potential impacts of high-speed worms, but the return of the worm to prominence, whether driven by criminals looking for profit or nation-states looking to test attacks, should give us pause.