Bobby Chesney, Susan Hennessey, and Benjamin Wittes just posed a "quick question" to Apple's HR department: does Apple have a backdoor into its employees' work devices? In phrasing the question this way, it is clear that Bobby, Susan, and Ben are wondering why Apple can't just turn the same capability on for everybody. The quick question has a quick answer: yes, Apple likely does have a "backdoor" for employee devices. It's called Mobile Device Management (MDM). But it would be catastrophically dangerous to use on a global basis.
MDM adds a separate authorized user, in the form of the MDM server, which allows the company to back up, examine, and authorize everything on the device, including the contents of otherwise encrypted iMessages and files. It is the corporate backdoor through which companies that want or need to can monitor employee activity on otherwise secure devices. It is also an authorized backdoor: the person who controls the device must enable it, but subsequently can't disable it without authorization. If San Bernardino's health department had simply configured the MDM software it had already purchased for Farook's iPhone, the FBI would have gotten into the phone months ago and found absolutely nothing on it.
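For the curious, MDM enrollment works by installing a configuration profile on the device. The sketch below shows roughly what the MDM payload looks like; the URLs, UUIDs, and identifiers are placeholders, and the authoritative key list lives in Apple's MDM protocol documentation:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>PayloadType</key>        <string>Configuration</string>
  <key>PayloadIdentifier</key>  <string>com.example.mdm.profile</string>
  <key>PayloadUUID</key>        <string>00000000-0000-0000-0000-000000000001</string>
  <key>PayloadVersion</key>     <integer>1</integer>
  <!-- Once installed, the user cannot remove the profile without authorization. -->
  <key>PayloadRemovalDisallowed</key> <true/>
  <key>PayloadContent</key>
  <array>
    <dict>
      <key>PayloadType</key>    <string>com.apple.mdm</string>
      <key>PayloadUUID</key>    <string>00000000-0000-0000-0000-000000000002</string>
      <key>PayloadVersion</key> <integer>1</integer>
      <!-- The corporate server the device checks in with and takes commands from. -->
      <key>ServerURL</key>      <string>https://mdm.example.com/server</string>
      <key>CheckInURL</key>     <string>https://mdm.example.com/checkin</string>
      <!-- Bitmask of what the server may do (inspect configuration, install and
           remove profiles, lock, erase, and so on). -->
      <key>AccessRights</key>   <integer>8191</integer>
      <key>Topic</key>          <string>com.apple.mgmt.External.placeholder</string>
      <key>IdentityCertificateUUID</key>
      <string>00000000-0000-0000-0000-000000000003</string>
    </dict>
  </array>
</dict>
</plist>
```

Note the `ServerURL`: whoever answers at that address speaks with the full authority granted in `AccessRights`, which is exactly why the server itself is such an attractive target.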
It is a certainty that most corporate Apple iDevices have MDM enabled.
But MDM is not without risk. Beyond the additional vulnerabilities the MDM system introduces on the phone itself, each company using MDM runs its own MDM server, and an attacker who compromises that server gains control over all of the company's iDevices. Suppose Chinese intelligence or a Russian hacker wants to take over a target's iPhone: if the target's company uses MDM, the attacker can either attack the iPhone directly or simply go after the MDM server and, through it, take full control of the target's phone and everybody else's phone in the institution.
The MDM server thus represents a single point of failure for an institution's security posture. Both Apple and (hopefully) the companies deploying MDM are aware of this and take measures to restrict access to the server to the corporate network and authorized VPNs. Today the MDM server is a target, but not a crucial one: an attacker who has already compromised the institution's internal network can simply read all the unencrypted email directly. But if an institution relied exclusively on secure messaging, the MDM server would become by far the easiest way to compromise the entire infrastructure. So while a corporate MDM server is currently a tolerable risk, tomorrow the risk may be significantly higher.
Have attackers ever targeted a BlackBerry Enterprise Server, BlackBerry's equivalent corporate backdoor, which has historically been used for financial communication? Probably, but neither BlackBerry nor the targets publicize such intrusions. We do know that BlackBerry's individual (non-enterprise) service is backdoored by design, and that BlackBerry has turned over the master key to at least one third party.
Now imagine a single, global MDM server for all iPhones, which is effectively what implementing Feinstein/Burr or a similar backdoor requirement would demand, short of simply removing security altogether. In order to provide the content of anybody's communications on demand, Apple would need the capability to compromise everybody's. And in order to respond to a wide variety of court orders, such a server would necessarily need to be online, or at least nearly so, to handle all the requests.
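The structural problem can be sketched in a few lines of toy code. This is a deliberately simplified model with throwaway XOR "encryption," not real cryptography, and all the names are invented for illustration: if every message must be readable on demand, each user's key ends up wrapped to one global escrow key, and stealing that single key unlocks everyone.

```python
# Toy model (NOT real crypto) of a global key-escrow "MDM" requirement:
# one escrow key that can open any user's traffic is, by construction,
# a single point of compromise for every user at once.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256-derived keystream."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# The hypothetical global escrow key the provider must hold online
# to answer court orders on demand.
ESCROW_KEY = secrets.token_bytes(32)

def send_message(user_key: bytes, plaintext: bytes) -> dict:
    """Encrypt to the user, AND wrap the user's key to the escrow key."""
    return {
        "to_user": keystream_xor(user_key, plaintext),
        "to_escrow": keystream_xor(ESCROW_KEY, user_key),
    }

# Two unrelated users with independent keys:
alice_key = secrets.token_bytes(32)
bob_key = secrets.token_bytes(32)
m1 = send_message(alice_key, b"alice's secret")
m2 = send_message(bob_key, b"bob's secret")

# An attacker who steals only ESCROW_KEY unwraps every user's key,
# and with it every message on the system:
for msg in (m1, m2):
    recovered_key = keystream_xor(ESCROW_KEY, msg["to_escrow"])
    print(keystream_xor(recovered_key, msg["to_user"]))
```

The point of the sketch is the asymmetry: compromising `alice_key` exposes one user, while compromising `ESCROW_KEY` exposes all of them, which is why such a server would have to be defended against literally every attacker in the world.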
It is a near certainty that a Feinstein/Burr MDM server would become a prized target for every attacker on the planet, from national intelligence services to anonymous hackers looking for every nude pic on every iPhone. Do we really want to subject our infrastructure to such a single point of compromise? How would this not greatly compromise our national security?