0-days, N-Days, iPhones, and Android

By Nicholas Weaver
Tuesday, August 30, 2016, 12:13 PM

Recently, I wrote the following on the student site for CS 161 (Computer Security), which I'm co-teaching this semester at Berkeley.

A zero day is a vulnerability unknown to the rest of the world: there is no patch and there is no defense.

Once a vulnerability is discovered and a patch is available, it is best described as an N-day vulnerability: everybody knows about it, but there are defenses.

In general, zero days are relatively rare. They take work to discover, and against a modern, hardened system a "fully weaponized zero day" is actually a chain of zero-day exploits, all of which need to work. Once discovered, it ceases to be a zero day. If you want to buy a zero day (you can), be prepared to spend serious money.

N-days, however, are quite common. People often create exploits for N-days for fun, for educational purposes, and because it is easy: you just compare the patch with the original code. Exploit frameworks like Metasploit in particular often get exploits added within a couple of days of a patch's release. If you want an N-day, just download it.

Every attacker loves N-days, from Annoyingly Persistent Teenagers to run-of-the-mill criminals to Advanced Persistent Threats (think NSA, China, Russia, etc.). N-days are cheap (often free), readily available, and difficult to trace back to a particular attacker even when an attack is caught.

This is why patching is so critical: if your system is updated, all of these bad actors are out of luck.

Unfortunately, now you have the problem of patch availability and economic incentives (remember, security often comes down to money), and this interacts poorly with the Android ecosystem.

Apple has a clear incentive to support their phones: they don't make money just on the phone, but also on 30% of all the music and apps you pay for. Plus, Apple has a relatively easy process: for every patch, you need to check it out on every version of the device with a large suite of "regression tests" that make sure you didn't break something else. Fortunately for Apple, you can count the number of devices on your fingers and toes: you've got the 4s, 5, 5s, SE, 6, 6+, 6s, 6s+, and assorted iPads and iPods. True, there are 3-4 models of each one of those (for different frequency bands and cellular technologies), but it's still a very small space.

The opposite is true for Android. A Nexus phone is like an iPhone: Google gets revenue from the continued usage of the phone, so Nexus phones get patched. Unfortunately, Nexus devices aren't cheap. Google initially subsidized them in order to shame OEMs into doing a better job, but these days the cheapest Nexus is only $50 cheaper than an iPhone SE. Worse, you can't just walk into a corner cellphone store and buy a Nexus.

But for all the rest, the Android manufacturers don't gain recurring revenue, so they have little reason to care about patches. Worse, even if a manufacturer wants to patch, the update also has to go through a carrier approval process, so the carrier has to care too. The situation is slightly better on high-end Android phones, because people who spend a lot of money on a device are going to be mighty peeved if they know it's abandoned, but even flagship devices often suffer substantial delays in receiving critical updates.

This is why I say: "To secure an iPhone, just keep it patched.  To secure a non-Nexus Android, power it off and throw it in a landfill."

The above illustrates both a technical and a policy problem, the latter of which I do not have a solution for but which is in dire need of resolution. Currently, mobile phone security is a luxury good. If you can afford an iPhone or a Nexus, you generally get to be safe against criminals and bad actors, and even nation-states will think twice before attacking you, since burning a zero day carries substantial cost. But for the rest of the world, mobile phone security is a dismal situation, one that imposes significant costs on society.

My question for Lawfare readers: how do we change this?