When I was younger, I didn’t like to eat my peas. So I always put them off for last, but eventually, I’d realize that it was something I had to do and … just do it.
I feel a little like that in writing about Title I of the Cybersecurity Act of 2012 – the provisions that will create a new regulatory structure for cybersecurity. Reviewing the details has to be done – and it has to be done well. But I hope the bill’s authors will forgive me if I say that working your way through the text is a bit like eating peas – you know it’s good for you in the end but it isn’t exactly what you were hoping to do with the weekend.
So … what do we have in Title I? I’ve already previewed the basic philosophy of the regulatory provisions – they are intended to set performance standards rather than specific technological mandates. As we will see, there is reason to be skeptical even of this prospect – but we should begin by acknowledging that a more intrusive measure might have been considered and that the method chosen is, in some ways, a unique effort and a decided change from traditional programs.
So let’s ask three questions: 1) Who will have to meet the new performance standards? 2) How will the standards be set? And 3) How will the standards be enforced?
Who Is Covered?
As I noted in an earlier post, the basic definition of who is covered by the new regulatory system limits covered cyber infrastructure to systems whose disruption would cause catastrophic interruption of life-sustaining services, catastrophic economic damage, or severe degradation of national security capabilities. To determine which systems and assets fall into this category, the bill anticipates a two-stage process: First, the Secretary of DHS is directed to conduct a sector-by-sector analysis to determine which sectors are at greatest risk. Presumably this analysis will determine both which sectors (like, say, the electric grid) are critical and which (say, perhaps, the financial sector) have already taken significant steps to counter an attack.
Then the Secretary will develop a process for designating critical systems within a sector. Once the process is in place, the actual designation can begin – starting with the most at-risk systems and assets in the most critical and at-risk sectors. One imagines, for example, that larger electrical grids will precede smaller ones in criticality based on the size of the population they serve. Owners who wish to may challenge their designation as critical through a civil action in Federal court. [I wrote earlier about the carve-out for commercial IT products – this is where that exclusion would apply.]
The bill attempts to further limit the scope of its regulatory ambit by specifying that the new performance standards will not apply if the critical infrastructure system or asset is already being adequately regulated by another federal agency. Presumably, this means that if the Secretary of DHS thinks that the cybersecurity regulations the FERC has in place for the electric grid are adequate, the Secretary won’t override them with new DHS regulations. Likewise, performance standards will not apply if the owner of the critical infrastructure has already taken the necessary steps to protect its critical system or asset from a cyber attack.
So what does this mean? A few observations:
- First, as Stewart Baker (former General Counsel of NSA and former Assistant Secretary for Policy at DHS) noted in his testimony before the Senate, limiting coverage to systems whose failure will cause an “extraordinary number” of fatalities is a bit odd. What is an “ordinary” number? I understand why the drafters have written as they did – they want to avoid the charge that they are expanding cybersecurity regulation to cover every cyber system in America. But it is still a bit unsettling.
- Second, as James Lewis of the Center for Strategic and International Studies noted, the entire enterprise of creating a protected list by definition creates an unprotected list as well, and is a “bit like writing a targeting list of our opponents.” I don’t know how you avoid that problem unless, again, you expand this regulatory structure to cover everything. The reality is we can’t protect all systems all the time.
- Third, on reflection, I think that the two exclusions (for adequate regulation by another body and for having voluntarily taken steps to protect your system) amount to less than meets the eye. For one thing, it is clear that critical systems will have to meet some standard of protection, and whether they have done so adequately will ultimately be judged by the Secretary of DHS. Thus the “adequacy” of alternatives will inevitably converge to whatever standards DHS winds up setting, and DHS will have the final word in defining them.
How Do We Set Standards?
OK … so we are going to set up a regulatory structure that is based on performance standards instead of regulatory mandates. How does the bill propose to accomplish that task?
The bill tasks the Secretary of DHS with developing cybersecurity performance requirements. In doing so, the Secretary will consider existing regulations, performance requirements developed by the private sector, and any other industry standards and guidelines identified through a review of existing practices. Once that review is complete, the Secretary will consider whether those existing practices, regulations, and performance requirements are “adequate.” If they are not, then the Secretary, in consultation with the private sector, will develop, on a sector-by-sector basis, risk-based cybersecurity performance requirements for owners of “covered” critical infrastructure.
Finally, section 104(g) of the act provides that the Secretary, “in developing performance requirements shall take into consideration available resources and anticipated consequences of a cyber attack.” This looks a bit like a cost-benefit analysis requirement without actually using those words.
And, indeed, wholly apart from questions about whether the performance requirements will be any good, the main criticism is likely to be that implementing them will simply cost too much. That’s what the US Chamber of Commerce thinks, though DHS Secretary Napolitano disagrees. The truth, I think, is that nobody has any real idea.
The problem with the novel performance standards approach (which, otherwise, is far superior to a command and control system of rules) is that the legislation is really just an agreement to agree. It’s a command to begin a process that identifies standards of cybersecurity protection. Nobody knows what those standards might be in the end. And until the standards are defined nobody can really know how owners will achieve them – and thus nobody can reasonably predict what the costs of compliance will be. They may be cheap and easy if all it takes is to air gap some critical systems. Or they may be as expensive as heck if the only way to achieve compliance is to deploy a suite of hyper-sophisticated intrusion detection systems.
To be sure, the mandate to create performance requirements is caveated with a number of provisions intended to moderate their stringency – consultation with industry, deference to existing best practices, and consideration of cost. But in the end, the commitment to a performance standard is a bit like betting without looking at your hole card – you know you are in the game, but you can’t be exactly sure how the game will turn out.
One final note reflecting my own (perhaps idiosyncratic) perspective on cybersecurity: Given the predominance of offense over defense (at least at this time) in cyberspace, the likely most effective method of dealing with cyber vulnerabilities is to prepare for failure – that is, to establish plans for continuity of operations. I think it is fair to characterize Title I as focused far more on attack prevention than it is on recovery from attack – the only real mention of resiliency I can find is in section 105(b)(1)(C) where the regulations creating the performance requirements are (briefly) instructed to include rules requiring owners to “develop or update continuity of operations and incident response plans.”
How Will the Standards Be Enforced?
Section 105(c) contains the enforcement provisions of the bill. They begin with a requirement that, on an annual basis, owners of covered critical infrastructure self-certify or submit third-party assessments showing that they have developed measures sufficient to satisfy the cybersecurity performance requirements created under the bill’s regulatory system. Since the third-party assessment industry is, right now, virtually non-existent [ALERT: Good job opportunities out there!], self-certification is likely to be the norm, at least initially.
The section also provides that the regulations adopted by DHS should allow for a civil enforcement action and monetary penalties against covered infrastructure operators who do not comply with the requirements and who don’t “remediate the violation within an appropriate time.” I imagine the regulations will, among other things, define an “appropriate time” but here, again, the legislation is basically an instruction to DHS to begin a process. The end results remain obscured in the mists of the future.
The Regulatory Timeline
So, if we are going to have a process, how is that process going to play out and how long will it take? Here, again, Stewart Baker has a good bit of analysis (his testimony can be downloaded here). I won’t bore you with the details but his bottom line is that “a company that simply exercises rights conferred by the title could delay any cybersecurity measures for eight to ten years after enactment.”
There are two ways to think about that sort of timeline. One is to suggest that it is too long and that, therefore, government needs authority to act more quickly. The other (which is more consistent with my own view) is to realize that the regulatory process in America is too slow for this cyber environment and that the game isn’t worth the candle. Either way, the regulatory reality is daunting.
What Lies Ahead
The authors of the Cybersecurity Act of 2012 are to be commended. Their regulatory effort has attempted to avoid the pitfalls of a pure command-and-control regulatory system and that’s a good choice. But even that effort may prove to be a bridge too far.
In the end, the regulatory program is going to be the main field of conflict in the next few weeks. There seems to be an emerging consensus that information sharing is important – but no such consensus about the need for a regulatory program, which to many looks like a bit of a leviathan. As Senator McCain said, the Republican alternative bill will “aim to enter into a cooperative relationship with the entire private sector through information sharing, rather than an adversarial one with prescriptive regulations.” But it remains to be seen whether the disagreement over a regulatory structure means that the Senate will also be unable to agree on information sharing provisions.