In March, the Justice Department unsealed an indictment against seven Iranians for hacking the U.S. financial sector and a dam in New York. Debate ensued between supporters and skeptics of using indictments to hold hackers accountable. Supporters like FBI Director James Comey believe they can discourage hackers with the message that “The FBI will find those behind cyber intrusions and hold them accountable — wherever they are, and whoever they are.” Skeptics, like Fred Kagan, argue that unless the U.S. imposes more meaningful consequences, “just naming them gives them street cred in Tehran.”
Lost in this debate, however, is what we can learn from this episode of Iranian hacking to protect ourselves in the future. Fortunately, the indictment reveals a clue for how to do so. In 2012 and 2013, several Iranian hackers flooded the websites of major U.S. banks with junk traffic in distributed denial-of-service attacks. To accomplish this, these hackers scanned the Internet and identified computers and servers running “software that had not been updated to address certain known security vulnerabilities.” With this line, the Justice Department confirmed what information security officials have argued for years: that the vast majority of hacks exploit known vulnerabilities. (We know less about how one of the hackers gained access to a SCADA system that operated a dam in Rye, New York.)
This is a powerful statistic: roughly three-quarters of hacking incidents occur through vulnerabilities we already know about and therefore have the opportunity to fix. We don’t know why the software at issue in the Iranian hacking case wasn’t fixed—whether a patch was never issued or simply never applied—but had the software in question been patched, it would have closed off at least this avenue of attack on U.S. banks.
What can we do to increase the chances that known vulnerabilities don’t lead to the next big hack?
First, we must treat the security of code as a first-order priority, not a second-order afterthought. Noted cybersecurity researcher Peiter Zatko (aka Mudge) is trying to help consumers and businesses do just that. His Cyber Independent Testing Laboratory—essentially a cybersecurity version of Consumer Reports—will test, quantify, and compare the security and resilience of software against exploitation, so buyers can make purchasing decisions informed by explicit knowledge of a product’s security. If software is riddled with vulnerabilities and generally has poor security hygiene, it should receive a low score, and potential purchasers would know this before they buy.
Second, we must restructure how we build community knowledge of vulnerabilities. For years, the non-profit MITRE Corporation has managed the Common Vulnerabilities and Exposures database, which tracks known vulnerabilities. But the number of vulnerabilities has grown dramatically, putting too much stress on a system never designed to keep pace with this much growth. As the Internet of Things brings software to even more appliances and devices, the number of known vulnerabilities will only keep growing.
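To make the value of such a database concrete, here is a minimal sketch in Python of the basic task it enables: checking an installed product and version against a list of known, already-patched flaws. The advisory entries and version numbers below are entirely made up for illustration; they are not real CVE records.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Advisory:
    """A simplified, hypothetical stand-in for a CVE-style entry."""
    cve_id: str
    product: str
    fixed_in: tuple  # first version that patches the flaw, e.g. (2, 4, 1)

def is_vulnerable(product: str, version: tuple, advisories: list) -> list:
    """Return IDs of known advisories affecting this product/version:
    any advisory for the product whose fix arrived in a later version."""
    return [a.cve_id for a in advisories
            if a.product == product and version < a.fixed_in]

# Illustrative, made-up entries -- not real CVE data.
known = [
    Advisory("CVE-0000-0001", "webserver", (2, 4, 1)),
    Advisory("CVE-0000-0002", "webserver", (2, 2, 0)),
]

print(is_vulnerable("webserver", (2, 3, 0), known))  # → ['CVE-0000-0001']
```

Real-world matching is messier (version ranges, backported fixes, vendor naming), but the core idea is the same: a shared catalog lets any defender ask, mechanically, whether their inventory contains software with a known, fixable hole.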
One final step is for individuals, companies, and governments to focus on fixing known vulnerabilities in their own systems. We can only demand more secure software and up-to-date patches if we are equally committed to actually installing them. For individuals, this is often a no-brainer: our iPhones and Androids can be configured to update automatically when developers release new (and hopefully more secure) versions of their applications. Businesses and governments face different challenges, as updating to more secure software can render legacy programs inoperable. But this is more of an economic tradeoff between convenience and security than a true technical challenge.
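That tradeoff can be sketched in a few lines of Python. This is a toy model, not a real patch-management tool: the fleet inventory and the `legacy_dependency` flag are hypothetical, standing in for whatever compatibility constraints an organization actually tracks.

```python
def triage_updates(hosts):
    """Split a fleet into hosts safe to auto-patch and hosts needing
    manual review because an update could break a legacy program."""
    auto_patch, manual_review = [], []
    for host in hosts:
        if host.get("legacy_dependency"):
            manual_review.append(host["name"])
        else:
            auto_patch.append(host["name"])
    return auto_patch, manual_review

# Hypothetical inventory for illustration.
fleet = [
    {"name": "laptop-01"},
    {"name": "billing-server", "legacy_dependency": True},
    {"name": "web-frontend"},
]

print(triage_updates(fleet))  # → (['laptop-01', 'web-frontend'], ['billing-server'])
```

The point of the sketch is that the default should be automatic patching, with legacy exceptions made explicit and reviewed, rather than the reverse.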
Steps like these need not be groundbreaking to make life much harder for our adversaries. With more focus on eliminating known vulnerabilities in the software on which we increasingly rely, we can reduce the chances that future gangs like the recently indicted Iranian hackers will succeed.