Cybersecurity: Crime and Espionage
Learning from the Attack against Sony
On June 26, 2014, the BBC reported that North Korea had threatened war against the United States if a Sony-produced movie (“The Interview”) were released. On November 24, 2014, Sony Pictures Entertainment was the victim of a cyberattack that compromised unreleased films, private correspondence, and other sensitive information. A group calling itself Guardians of Peace (GOP) took credit for the attack, and North Korea officially denied involvement, though it said the hack may have been the “righteous” work of its supporters. On November 28, media reports first surfaced that the attack on Sony stemmed from North Korean displeasure over the forthcoming release of the movie. In the wake of GOP threats of violence, Sony canceled the theatrical release of The Interview in mid-December. Facing substantial criticism, Sony reversed that decision a week later.
It’s premature to conclude that the Sony saga is over, but now that the furor has calmed somewhat, it seems appropriate to consider which lessons so far should be highlighted for policy makers. Here are some of them.
Theft of intellectual property for profit is not the only possible outcome of a hack. Reinforcing lessons learned from destructive cyberattacks against Aramco and South Korean banks, the perpetrators wiped computer systems at Sony clean, rendering them largely inoperable and crippling business operations. They also obtained and released sensitive and embarrassing correspondence and some unreleased films. None of these actions appear to have financially benefited any party.
Attribution of a hostile cyber operation by responsible government authorities is uncertain and slow. On December 9, assistant FBI director (cyber division) Joseph Demarest said regarding the Sony incident that “there is no attribution to North Korea at this point." On December 19, the FBI released a statement saying that “As a result of our investigation, and in close collaboration with other U.S. government departments and agencies, the FBI now has enough information to conclude that the North Korean government is responsible for these actions.” The FBI reiterated its conclusion on January 7, with Director James Comey saying “I have every confidence about this attribution, as does the entire intelligence community.” That is, it took several weeks—not hours, not days—for the U.S. government to ascertain North Korea’s responsibility. Note also that the government’s position changed at least once—not a surprising outcome in the uncertain environment of cyberspace. In the absence of prior evidence and other sources, attributing responsibility is likely to take much longer.
Government statements about attribution will face skeptics who have a significant public voice. A few days after the FBI’s initial attribution of the attack on Sony to North Korea, Marc Rogers, a security researcher at the web performance and security company CloudFlare, said that “calling out a foreign nation over a cybercrime of this magnitude should never have been undertaken on such weak evidence. The evidence used to attribute a nation state in such a case should be solid enough that it would be both admissible and effective in a court of law. As it stands, I do not believe we are anywhere close to meeting that standard.” This analysis and others like it received a great deal of media attention, and thus, for much of the public, authoritative attribution has not yet been established. (In a satirical posting about the attribution saga, a number of pranksters developed a Sony Hack Attribution Generator that allows a user to display a seemingly authoritative page with “evidence” to support a variety of scenarios.)
Much of the public—and the public commentators who help to shape public opinion—do not hold the national security decision making process in high regard. For example, they do not understand that intelligence is often a matter of gathering together a number of weak strands of evidence, all of which point in the same direction, and drawing a more confident conclusion than any individual piece of evidence would otherwise warrant. (An individual coin flip may turn up heads or tails and thus provide no information at all about whether the coin is fair, but 10 heads in a row is pretty substantial evidence that it is not.) Furthermore, government authorities often do have evidence that is not available in public. For example, the U.S. government may have monitored a telephone call between senior North Korean officials discussing North Korean involvement in the attack. A trustworthy and reliable U.S. ally may have passed along information about North Korean involvement obtained from a source in Kim Jong-Un’s office. (According to a story in the New York Times on January 18, information provided by unnamed former U.S. and foreign officials indicates that evidence of this type was critical in the U.S. government case against North Korea.) Such information would almost never be publicly acknowledged by the U.S. government, and yet might be quite useful in attributing an attack to its source. It is this misunderstanding that leads to the sentiment—expressed by more than one analyst—that only evidence that would stand up in a “court of law” should be used to make such determinations, and that evidence that does not meet this standard should be excluded from consideration.
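The arithmetic behind combining weak evidence can be made concrete. The sketch below (not part of the original article; the likelihood ratios are invented purely for illustration) shows the coin-flip example and a simple odds-multiplication view of how several individually weak clues can jointly support a confident conclusion:

```python
# Illustrative sketch: how several weak, independent pieces of evidence
# can combine into a confident conclusion.

def prob_heads_run(n_heads, p_fair=0.5):
    """Probability of n_heads consecutive heads if the coin is fair."""
    return p_fair ** n_heads

# The article's coin example: 10 heads in a row under a fair coin.
p = prob_heads_run(10)   # 1/1024, i.e. about 0.001 -- strong evidence of bias

# Odds-multiplication view of evidence aggregation: each weak clue shifts
# the odds only slightly, but jointly they can be decisive.  The likelihood
# ratios below are made-up numbers chosen purely for illustration.
def posterior_odds(prior_odds, likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1.0                          # even odds before any evidence
weak_clues = [2.0, 1.5, 3.0, 2.5]    # each clue only mildly suggestive
print(posterior_odds(prior, weak_clues))   # 22.5-to-1 after combining all four
```

No single clue here is worth more than 3-to-1 on its own, yet together they yield odds above 20-to-1, which is the pattern intelligence analysts exploit.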
Excessive rhetoric will accompany a cyberattack that has not been seen before. A few weeks after the event, former Speaker of the House Newt Gingrich called the event “pure cyberwarfare.” Three days later, Senator John McCain called it “a manifestation of a new form of warfare." But those following cybersecurity realize that there is a long history of calling anything bad happening in cyberspace a form of cyberwar. Stealing intellectual property through the Internet, recruiting terrorists, and defacing websites have all been called cyberwar. All of these things are bad, but such rhetorical excesses trivialize the real meaning of war.
Doing the minimum for cybersecurity is not enough. In 2005, the executive director of information security at Sony said regarding corporate cybersecurity efforts that “We’re trying to remain profitable for our shareholders, and we literally could go broke trying to cover for everything. So, you make risk-based decisions: What’re the most important things that are absolutely required by law?” It may indeed be that Sony was in technical compliance with all relevant regulations, but even if it was, the lesson is that minimum compliance is not enough. Indeed, assistant FBI director Joseph Demarest asserted that the attack on Sony would not have been blocked by 90 percent of private sector defenses now in place.
When attacked, some companies will take offensive actions in cyberspace to limit the damage. According to a story in Ars Technica, Sony sought to obstruct members of the public trying to access films and other files exfiltrated from Sony Pictures Entertainment and posted on BitTorrent file-sharing sites: it planted a large number of bogus files whose signatures matched those of the exfiltrated files, so anyone seeking the exfiltrated files had a high probability of downloading a bogus file instead. Because of the way the BitTorrent protocol works, every user who downloaded a bogus file signaled to the BitTorrent community that the bogus file was authentic, thus leading other users to download the same bogus file. And because many people were trying to download the real files, the effect was that the relevant servers were flooded with requests for bogus files. In effect, the Sony action tricked unwitting users into mounting a denial-of-service attack against the BitTorrent servers.
With these lessons in mind, and recalling the entire episode to date, what do we need to do in response? Part of the challenge in figuring out what to do next is that we don’t have a clear understanding of how old rules should be interpreted in light of new technology. So what might provide some guidance?
One guideline is that a cyberattack producing effects comparable to those of a kinetic attack should be judged in the same way. That is, if a cyberattack against an electric power generation facility produces the same damage that a cruise missile could create, we should judge the cyberattack in the same way that we would judge a cruise missile attack. Since no one was killed as a result of the cyberattack, and no irreversible damage was done (deleted data can be restored from backups), the Sony incident doesn’t qualify.
What about serious damage that could be caused only by cyber means? An electronic takedown of the stock market? A cyberattack with economic effects similar to the financial crisis in 2007? A foreign cyberattack on electronic voting machines that thwarts the will of the citizenry? Of course. Such actions would surely count as acts of war. But the Sony incident doesn’t qualify here either. Sony Pictures Entertainment is an important company, but no one seriously argues that the nation’s economy will collapse even if Sony goes under (which it will not).
These observations hardly mean that the United States should sit back and do nothing. Even if the Sony hack is not an act of war, an appropriate response is necessary. What nearly all U.S. commentators have agreed on to date is that if the attack on Sony can really be attributed to North Korea (a point that North Korea disputes), going without a response would set a precedent that will simply invite similar troubles in the future.
A useful response has two parts—addressing the North Korean provocation and getting our own house in order. On the first, the Obama administration is right to seek a response that is aggressive enough that North Korean leaders will notice and care about it, but not so provocative as to lead to an escalating cycle of action and reaction in cyberspace. Such a cycle could indeed lead to serious hostilities, not just with cyber weapons but with other things that go boom as well.
On December 22, 2014 and again on December 23, the North Korean Internet came under an attack severe enough to disconnect it from the rest of the Internet. The U.S. government declined to take responsibility for that attack, and the parties responsible remain unknown at this time. But if the U.S. government was responsible, the attack may meet the notice-but-not-provoke criterion. In any event, U.S. responses may not yet be complete, and only time will reveal their full scope.
Getting our own house in order goes beyond the Sony incident. It seems to be of less interest to political rhetoricians, but it would involve at least four steps.
First, we would need to decide what evidence we would be willing to release to defend our assertion that a particular party had attacked us in cyberspace. We often know a lot more about attribution of cyberattacks than we let on, but the days of blind trust in government assertions are long gone. There is always a tension between releasing information to persuade a skeptical world and alerting an adversary about methods we may have used to obtain that information—but resolving that tension reasonably is why we elect our national leaders.
We also need to decide what thresholds we are willing to observe. If we call the North Korean attack on Sony an unjustified act of war against the United States, are we willing to forego similar attacks on foreign corporations whose actions go against U.S. interests?
Companies need to plan for how they will protect their most critical assets against cyber intrusions. The attack on Sony was not particularly sophisticated from a technical perspective—what was new was the fact of malicious destruction, where in the past “only” theft of intellectual property had taken place. In the future, corporate recovery plans will have to anticipate the possibility of rebuilding computer systems and data structures as well as responding to release of sensitive data.
But perhaps the most important lesson of the Sony affair to date is that we learn from our surprise. Few policy makers had anticipated the possibility of a direct destructive cyber assault by a nation state against an individual corporation that was not part of the nation’s critical infrastructure. What we learn is that cyberattacks can occur in entirely unexpected contexts, and the claim of “that would never happen” is not an adequate basis for either corporate or policy planning.