On Tuesday, January 27, 2015, the Federal Trade Commission released a staff report on cybersecurity and the Internet of Things.
Although the report, as a staff report, has no binding authority on anyone, and although it merely stated that “commission staff encourages companies to consider adopting the best practices highlighted by workshop participants,” it was predictable that opposing voices would be heard warning that excessive regulation could smother innovation and scare customers away from a promising new technology. (See this, for example.)
The best practices mentioned include building security into devices at the outset, rather than as an afterthought; training all employees in good security and ensuring that security issues are addressed at the appropriate level of responsibility within the organization; using service providers that are capable of maintaining reasonable security and providing reasonable oversight of those service providers; implementing a defense-in-depth approach to security; limiting the ability of unauthorized persons to access a consumer’s device, data, or even the consumer’s network; and monitoring products throughout their life cycle and, to the extent feasible, patching known vulnerabilities.
These practices are hardly radical—indeed, many vendors of IT-related goods and services practice them today. But the resistance to even a nonbinding staff report recommending things that vendors are already doing underscores a fundamental observation about cybersecurity in the United States—we’ve seen this play before.
A discussion from a recent National Research Council report (of which I was an editor) provides the script for the first act:
In information technology (as in other fields), vendors have significant financial incentives to gain a first-mover or a first-to-market advantage. For example, the vendor of a useful product or service that is first to market has a virtual monopoly on the offering, at least until a competitor comes along. During this period, the vendor has the chance to establish relationships with customers and to build loyalty, making it more difficult for a competitor to establish itself. Furthermore, customers of the initial product or service may well be reluctant to incur the costs of switching to a competitor.
Policy actions that detract from the ability of the private sector to innovate are inherently suspect from this perspective, and in particular policy actions to promote greater attention to cybersecurity in the private sector often run up against concerns that these actions will reduce innovation. The logic of reducing time to market for information technology products or services runs counter to enhancing security, which adds complexity, time, and cost in design and testing while being hard to value by customers. For example, the real-world software development environment is not conducive to focusing on security from the outset. Software developers often experience false starts, and many “first-try” artifacts are thrown away. In this environment, it makes very little sense to invest up front in that kind of adherence unless such adherence is relatively inexpensive.
The fact of the matter is that as a nation we want better cybersecurity, yes, but we want innovation as well. And until someone explains how it is possible to bring more secure products to market in the same amount of time and at the same cost as today, we will be stuck with this tradeoff.
After the first act comes an interlude, in which people minimize the security risks or their significance. “Why would anyone ever want to do that?” is the mantra of this interlude.
The next act comes when the technology is well on its way to widespread adoption because it makes life better for many people—and something bad happens as the result of a hacking incident. The reaction then takes one of two forms: “that was a fluke, and the precise set of circumstances enabling that hack won’t occur again,” or “the technology is too widespread, and thus it is too expensive or impractical to fix the root causes (e.g., a security-flawed design) of the problems, so we’ll just have to fix the problems as they come up.”
The next interlude sees more and more evidence of security problems pile up even as the deployment of the flawed technology expands, and both sides argue themselves hoarse.
And the third act has yet to be written, because no consensus has emerged on our national priorities—on whether and how much we as a nation should be willing to pay, in costs incurred in the long term from a lack of upfront attention to security, for the gains of more rapid innovation in the short term.
I hear crickets.