Latest in Vulnerability Equities Process (VEP)
The Biden administration has an important opportunity to rebuild and sustain trust in the software ecosystem by reforming the government vulnerability disclosure process into a more transparent and frequently used system.
In the U.S. there has been a long debate about “vulnerability equities”—that is, whether the government should disclose a vulnerability it discovers to the vendor, allowing users to apply a patch and defend against exploitation, or keep the vulnerability secret to enable the government’s exploitation of targets. There is little data on how the process works. But the U.S. has the potential to learn from how the British handle the same problem.
The public release of the Vulnerability Equities Process (VEP) charter by the White House in late 2017 went a long way toward satisfying the public’s curiosity about the secretive, high-profile and contentious process by which the U.S. government decides whether to temporarily withhold or publicly disclose zero-day software vulnerabilities—that is, vulnerabilities for which no patches exist. Just recently, the U.K.
As governments increasingly find themselves needing information from networked sources for law enforcement, intelligence, and military purposes, one of the most difficult dilemmas they face concerns the use of so-called zero-day vulnerabilities—previously unknown flaws or bugs that can sometimes be exploited to gain access to servers that house information or control networks and infrastructure. Governments often have researchers looking for these flaws, and sometimes, governments purchase them on the open market.
When the government discovers a bug in any computer hardware or software system, should it immediately inform the device or software manufacturer, so the company can create a patch and protect its customers’ cybersecurity? When should the government be permitted to keep the information to itself, and exploit the vulnerability to hack into devices in support of law enforcement and intelligence agency operations?
The Government Accountability Office last week published a report that, among other things, weighs in on the pros and cons of the NSA/CYBERCOM “dual-hat” system (pursuant to which the director of the NSA/CSS and commander of CYBERCOM are the same person). The report deserves attention but also some criticism and context. Here’s a bit of all three.
1. What is the “dual-hat” issue?
We recently published a paper on the rediscovery of software vulnerabilities. This was the final version of a paper that had been in the works since September, peer-reviewed by the WEIS community during the winter, and then circulated for additional revision in early March. Since publication, two mistakes have come to light.
Software and computer systems are a standard target of intelligence collection in an age where everything from your phone to your sneakers has been turned into a connected computing device. A modern government intelligence organization must maintain access to some software vulnerabilities in order to target these devices. However, the WannaCry ransomware and NotPetya attacks have called attention to the perennial flip side of this issue—the same vulnerabilities that the U.S. government uses to conduct this targeting can also be exploited by malicious actors if they go unpatched.