These days, stories about the use of facial recognition software (FRS) are legion. One of us wrote in January about the Chinese government’s extensive use of FRS. Just this month, U.S. Customs and Border Protection began testing facial recognition technology at around a dozen U.S. airports. The New York Times reported on the use of FRS for security purposes in the private sector, notably in Madison Square Garden and at the American Airlines Center in Dallas. Despite broad experimentation, there is no federal law governing the use of FRS—although Illinois and Texas have laws that mandate informed consent. Whether used by governments or in private enterprise, the technology appears to be developing faster than the law.
Clearly, China is not the model for how a state might use this software responsibly, but its rapid adoption triggers a question: how would a responsible, privacy-respecting state use facial recognition software? Examining practice in the United Kingdom struck us as a good place to start, given its long history with closed-circuit television cameras and its recent, early efforts to use facial recognition software in policing.
While we will detail the U.K.’s rules and regulations in a later post, we first wanted to outline what we see as the potential costs and benefits of using FRS generally. FRS allows its users to compare a given facial image against a large database of stored images to make an identification, or to use a face as a biometric authentication tool. The technology undergirding FRS can also be used to identify behaviors, emotions, and even diseases.
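To make the one-to-many identification step concrete, here is a minimal sketch of how such a search works in principle: each enrolled face is reduced to a numeric "embedding" vector, and a probe image is matched to whichever stored vector lies closest. The names, vectors, and threshold below are invented for illustration; real systems derive much larger embeddings with trained models.

```python
import math
from typing import Optional

# Hypothetical enrolled database: identity -> embedding vector.
# (Toy 3-dimensional vectors; production systems use 128+ dimensions.)
database = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
    "carol": [0.4, 0.4, 0.9],
}

def identify(probe: list, threshold: float = 0.5) -> Optional[str]:
    """Return the closest enrolled identity, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        dist = math.dist(probe, stored)  # Euclidean distance between vectors
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(identify([0.88, 0.12, 0.28]))  # near alice's stored vector
print(identify([0.0, 0.0, 0.0]))     # no enrolled face within threshold
```

The threshold is the key policy knob: set it loosely and the system returns more matches but more errors; set it tightly and it misses genuine matches, a trade-off that recurs in the accuracy discussion below.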
As an agile tool, FRS will benefit different users differently. Governments around the world have begun experimenting with FRS in law enforcement, military, and intelligence operations. Additionally, FRS has the potential to benefit governments in other functions, such as the provision of humanitarian services. Corporations will realize benefits from FRS in innumerable ways over time, but some immediate examples exist in security, marketing, banking, retail, and health care.
Law enforcement and safety
Perhaps the most compelling argument for FRS is that it can make law enforcement more efficient. FRS allows a law enforcement agency to run a photograph of someone just arrested through its databases to identify the person and see if he or she is wanted for other offenses. It also can help law enforcement officers who are out on patrol or monitoring a heavily populated event identify wanted criminals if and as they encounter them.
Imagine a law enforcement officer wearing a body camera running FRS who identifies, from within a huge crowd, a person suspected of planning to detonate a bomb. The ambient presence of FRS applied to a feed from stadium cameras would allow law enforcement, in cooperation with the company managing event security, to identify dangerous attendees.
There are many contexts in which this law enforcement technology has already been brought to bear. Beyond spotting threats in a crowd, facial recognition software can be used to quickly suss out perpetrators of identity fraud; the New York Department of Motor Vehicles’ Facial Recognition Technology Program has been doing just that, with 21,000 possible identity fraud cases identified since 2010. The U.S. Department of Homeland Security is also experimenting with FRS to assist in identifying abducted and exploited children. Just this month, U.S. Customs and Border Protection started teaming up with airlines in Boston, Atlanta, Washington, and New York to use FRS for boarding pass screening. In a safety context, some American schools are installing cameras with FRS to identify the presence of gang members, fired employees, and sex offenders on school grounds.
Although intelligence agencies have not publicly affirmed their interest in (or use of) FRS, news reports indicate that the NSA uses sophisticated FRS to exploit the billions of images contained in emails, social media, and video conferences. This obviously facilitates the U.S. government’s ability to find intelligence targets worldwide. Similarly, the Guardian reported that GCHQ (the U.K.’s NSA equivalent) intercepted millions of Yahoo webcam images and used them “for experiments in automated facial recognition, to monitor GCHQ’s existing targets, and to discover new targets of interest.” Used this way, FRS can enhance a state’s national security.
The U.S. military employed FRS in Afghanistan and Iraq to identify potential terrorists and to enhance security in cities, as when the Marines walled off Fallujah and only allowed those who submitted to biometric scanning to enter that city. The tool had other uses as well, including helping Afghan officials to recapture dozens of individuals who had escaped from an Afghan prison in 2011. The military reportedly is awarding contracts to companies that are developing drones that employ FRS to find faces from above, track their targets, and potentially identify “adversarial intent.” In Turkey, the pro-government Daily Sabah reported that the Turkish military would begin a project employing a system called ASTARUS—an artificial intelligence system that uses facial recognition software to identify potential terrorists.
Refugees, internally displaced persons, and lost children may benefit from FRS, which can help with family reunification. A story from 2017 recounts how a 33-year-old Chinese citizen who had been abducted by human traffickers at age four was able to reconnect with his family after 27 years because FRS matched his photo at age 10 with a photo his family had posted of him at age 4. In light of the millions of people who have been displaced by recent conflicts in Syria and elsewhere, this tool could prove invaluable for reuniting families.
A number of companies are beginning to employ facial recognition software for commercial or convenience purposes. While many companies, like the previously discussed Madison Square Garden, are using FRS for internal security purposes, others have developed more creative uses for the technology. Mastercard is using facial recognition tools to allow “pay by face.” Ant Financial, a unit of Alibaba, allows customers to log in to their virtual wallets by taking selfies. Some retailers have begun to use FRS to identify their customers’ preferences based on what items they pick up and what path they take in the store. Others are tailoring advertisements to the excitement or lack of interest on your face as you walk by. (Whether you see this as a benefit or something pernicious depends on your perspective.) In Thailand, FRS is being used in the country’s biggest convenience store chain, 7-Eleven, to analyze customer behavior, including emotional reactions as shoppers walk past shelves or products.
FRS in the health care industry is at the cutting edge of research. While the technology is still developing, researchers have identified the potential for its use in identifying genetic conditions, making other diagnoses, and identifying signs of aging.
But even if these benefits from FRS sound appealing, there are a number of costs associated with the use of FRS. These costs will be borne by governments, private entities, and—importantly—individuals.
One commonly raised concern is that FRS is not 100 percent accurate. Searching a large facial database with a given image will inevitably produce some false positives. If the government is conducting the search, some individuals may be subject to questioning or investigation even though the FRS has misidentified them as suspects. If a corporation is using FRS, it could mean that the company misidentifies a job applicant as having a criminal record and denies him the job. The Electronic Frontier Foundation reports that these accuracy problems are worse for people of color; that is, FRS misidentifies African Americans and ethnic minorities at a higher rate than whites. At least one study conducted by researchers at the Massachusetts Institute of Technology has shown that FRS from IBM, Microsoft, and Face++ is less accurate when identifying females. A recent, controversial trial of facial recognition tools at the Notting Hill Carnival in the U.K. resulted in roughly 35 false matches and an erroneous arrest, highlighting questions about police use of the technology.
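The false-positive problem is largely a matter of base rates: when the crowd being scanned vastly outnumbers the watchlist, even a highly accurate system generates more wrong alerts than right ones. The numbers below are hypothetical, chosen only to illustrate the arithmetic, and do not describe any real deployment.

```python
# Hypothetical crowd-scanning scenario (illustrative figures only).
crowd_size = 100_000
watchlist_present = 50           # wanted individuals actually in the crowd
true_positive_rate = 0.999       # chance a wanted person is correctly flagged
false_positive_rate = 0.001      # chance an innocent person is wrongly flagged

true_matches = watchlist_present * true_positive_rate
false_matches = (crowd_size - watchlist_present) * false_positive_rate

# Of all alerts raised, what fraction actually point at a wanted person?
precision = true_matches / (true_matches + false_matches)

print(f"true matches:  {true_matches:.0f}")
print(f"false matches: {false_matches:.0f}")
print(f"precision:     {precision:.1%}")
```

Even with error rates of one in a thousand, innocent people flagged outnumber genuine matches two to one in this scenario, which is why small published error percentages can still translate into many wrongful stops.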
Although actors such as the FBI have articulated a set of policies that they employ to protect against abuse of FRS, it is not hard to imagine how governments generally could abuse the technology. A report last March found that the FBI was storing about 50 percent of adult Americans’ pictures in facial recognition databases without their knowledge or consent. The biometric database employed by the FBI is called Next Generation Identification and it was launched in 2010, garnering images from law enforcement activities and drivers’ licenses. When the U.S. Government Accountability Office evaluated the FBI’s use of FRS in 2016, it found that the program lacked sufficient oversight.
It isn’t too far of a leap to imagine that a government with broad access to stored biometric data might use it to identify and then harass peaceful protestors. Additionally, government officials might search databases for improper purposes. The fact of a government’s use of FRS may simply chill lawful behavior, including attending protests or religious services. (The New York City Police Department ultimately settled a lawsuit brought by Muslim citizens who sought to end NYPD surveillance of local mosques, which included videotaping attendees.)
Some citizens may resent the idea that the government obtains, holds, and uses their biometric data without their consent. (Of course, anyone who holds a passport or has sought a visa should not be surprised that the government at least has this information, even if the individual has not expressly consented to allow the government to retain it and use it as part of the facial “haystack.”) The European Union’s robust privacy laws will raise questions about the legality of new storage regimes and mechanisms for transfer of biometric data, at least within the EU.
Another potential cause for concern is the sharing of data between law enforcement and intelligence agencies. In most countries, law enforcement agencies are subject to greater regulation and transparency requirements than intelligence agencies are. (The United States may be relatively unusual in enacting significant protections against domestic collection by its intelligence agencies.) Therefore, even if other states establish clear ground rules for permissible law enforcement uses of FRS, those rules are unlikely to regulate intelligence uses of that same information.
A Russian company launched the FindFace app in 2016. The app lets users identify strangers using pictures of their faces, by matching the photos against hundreds of millions of profile photos from VKontakte, the Russian equivalent of Facebook. Imagine someone working as part of a criminal gang, using FindFace on his cell phone. He could take a photo of you on a train, subway or street; use the app to locate you on social media; and determine where you live. His fellow criminals could then decide that it is a good time to break into your house. Identity theft is another vulnerability inherent in the use of FRS. As explained in more detail below, the storage of facial measurements in code makes your facial identity easy to steal and transpose.
It is not hard to conjure up a variety of other ways in which criminals could abuse FRS to stalk, rob, or otherwise abuse you.
Once the government or a corporation has created a database of faces, that data becomes a target for hackers. (The same thing is true, of course, for fingerprints and other biometric data already commonly collected and stored.) Companies—such as Apple—are using feature-based facial recognition, which means the system takes a set of facial measurements, creates a facial architecture in code, and then creates a unique “hash ID.” Anyone who can break into the database of hash IDs can steal that ID and pretend to be you on any platform that uses that database. Moreover, cleaning up afterward is difficult, because unlike a password, you cannot change a face. Of course, it is not just private individuals who could try to access these databases; foreign governments presumably also see these databases as mother lodes of valuable information.
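To illustrate why a stolen hash ID is so damaging, here is a minimal sketch of the feature-based scheme described above: a vector of facial measurements is packed into bytes and digested into a stable ID. Everything here is hypothetical; real systems use far larger feature vectors, match noisy measurements by distance rather than exact equality, and wrap the comparison in additional cryptographic protections.

```python
import hashlib
import struct

def face_hash_id(measurements) -> str:
    """Pack a facial feature vector into bytes and derive a stable hash ID."""
    packed = struct.pack(f"{len(measurements)}f", *measurements)
    return hashlib.sha256(packed).hexdigest()

# Toy 4-number "facial measurements" (invented; real vectors are much larger).
enrolled = [0.12, -0.40, 0.88, 0.05]   # stored in the database at enrollment
probe    = [0.12, -0.40, 0.88, 0.05]   # presented at authentication time

# Identical measurements always yield the identical ID -- so anyone who
# steals the stored ID can present it anywhere the same scheme is used,
# and unlike a compromised password, the underlying face cannot be changed.
print(face_hash_id(enrolled) == face_hash_id(probe))
```

The determinism that makes such an ID useful for authentication is exactly what makes a breach of the database so hard to remediate.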
As noted above, corporations are now able to tailor marketing based on the specific identity of the person whose face the FRS is examining or based on the FRS’s interpretation of the mood or reaction of any person whose face is presented to it. These companies are effectively manipulating customers based on facial expressions—something we often have little control over.
There is a final, although less specific, cost worth noting, captured by the fable of the frog in boiling water. According to the fable, a frog placed directly in boiling water will jump out. A frog placed in warm water that is slowly brought to a boil, however, will be cooked to death. Today we may accept modest uses of FRS for beneficial law enforcement and commercial purposes, but ultimately may wake up to a world in which FRS has become pervasive and pernicious—either alone or in connection with a variety of other tools that collect information about us. Corporations and intelligence services are incredibly creative, and there are undoubtedly uses of FRS that we have not yet conceived of but that will surely come to pass. For example, developing in parallel with FRS is facial-mapping technology, which is already causing alarm because of its potential for “deep fakes”: fabricated images and videos of people doing things they never did, saying things they never said, or both. Unfortunately, guarding against unwanted uses prospectively, using the law, is very difficult, because legislators are generally not good at predicting future problems.
There is good reason to want an effective set of laws and guidelines for the use of FRS as adoption proliferates across platforms and entities. Even without presuming that a regulatory regime can fully keep pace with rapidly evolving technology, establishing frameworks now will both encourage confidence and investment in the development of FRS and protect against dangers already apparent. But different jurisdictions are developing different regulatory regimes at different paces. In our next piece, we will explore how far the United Kingdom has gotten in its efforts to strike the proper balance.