As public surveillance measures increase, the global pandemic is inadvertently offering protection from facial recognition technology. But how long until the algorithms catch up?
So far in 2020, an estimated 15 million to 26 million people in the United States have turned out for protests over the death of George Floyd, making it the largest movement in U.S. history. With such a massive uptick in demonstrations came drastically heightened surveillance, including the deployment of drones and helicopters by Homeland Security, which captured over 168 hours of footage from the protests. Concerns about privacy and anonymity grew as police began to arrest peaceful protestors. Tips for minimizing surveillance risk while protesting, including hiding recognizable tattoos and wearing coverings like sunglasses and face masks, spread across social media.
Protesters have been forced to become more savvy as the use of facial recognition technology by law enforcement has increased. At a House Oversight Committee hearing last year, it was revealed that the FBI’s massive facial recognition system can now match against over 640 million photos of Americans. In addition to the risks associated with the technology’s bias—it works well on white populations and less so on other demographics—a system capable of tracking individuals based solely on their physical appearance threatens to end anonymity in public life.
How does facial recognition work?
Facial recognition technology identifies individuals by measuring their facial features, then scans a large database of “face prints”—templates created from pre-identified photos—to find a match.
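The matching step can be sketched in a few lines of Python. Everything here is illustrative: the identities, the tiny three-dimensional vectors standing in for face prints, and the distance threshold are all invented for the example. Real systems compare embeddings with hundreds of dimensions produced by a neural network, but the core idea is the same: find the enrolled face print closest to the probe image, and accept the match only if it is close enough.

```python
import math

# Hypothetical, pre-computed "face prints": each is a numeric vector
# summarizing a face's features. These tiny 3-dimensional vectors are
# stand-ins for the high-dimensional embeddings real systems use.
database = {
    "person_a": [0.11, 0.52, 0.33],
    "person_b": [0.80, 0.10, 0.45],
    "person_c": [0.25, 0.60, 0.30],
}

def find_match(probe, threshold=0.25):
    """Return the closest enrolled identity, or None if no face print
    falls within the distance threshold."""
    best_id, best_dist = None, float("inf")
    for identity, face_print in database.items():
        dist = math.dist(probe, face_print)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist <= threshold else None

# A probe close to person_c's face print matches; a distant one does not.
print(find_match([0.24, 0.58, 0.31]))  # person_c
print(find_match([0.99, 0.99, 0.99]))  # None
```

The threshold is where accuracy trade-offs live: set it too loose and the system produces false matches of the kind described below; set it too tight and degraded inputs (poor lighting, partial occlusion by a mask) produce no match at all.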
Although facial recognition technology has been deployed more widely, it still has its flaws. Accuracy can vary significantly based on factors such as the quality of the camera, lighting, distance, size of the database, the algorithm, and, importantly, the subject’s race and gender. Facial recognition is historically less accurate when identifying people of color and women. The consequences are troubling, including false arrests: Recently, Robert Julian-Borchak Williams was misidentified by the algorithm and falsely arrested in Detroit.
Amnesty International is calling for a ban on the technology altogether, arguing that this type of surveillance targets the Black community, which is already disproportionately subjected to privacy abuses. “The ability to be part of an anonymous crowd is what allows many people to participate in peaceful protests and to feel safe. Instead of using these technologies to heighten people’s fears, law enforcement should fulfill their obligations to respect and facilitate the right of people to peacefully protest,” said Michael Kleinman, a director at Amnesty International. Cities like San Francisco, Oakland, and Somerville, Massachusetts, have all voted to ban the use of facial recognition technology city-wide, citing its unreliable track record.
Protests in the age of Covid-19
One aspect of the recent historic wave of protests could not be predicted. As demonstrations hit cities across America against the backdrop of the pandemic, the widespread use of face masks worn to prevent transmission of Covid-19 has inadvertently provided protection against facial recognition software.
In July, The Intercept reported on an intelligence note written by the Department of Homeland Security, which included the header, “Face Recognition Systems Likely to be Less Effective as Widespread Wear of Face Coverings for Public Safety Purposes Continue.” The bulletin was written in conjunction with Customs and Border Protection and Immigration and Customs Enforcement, and was distributed by the Minnesota Fusion Center as the George Floyd and Black Lives Matter protests were forming nationwide. Expressing anxiety that masks would become a roadblock to facial recognition and surveillance technology, the Department of Homeland Security alleged that protestors were exploiting CDC recommendations for mask-wearing in order to evade surveillance.
The use of masks by protestors calls to mind demonstrations in Hong Kong this past October, where law enforcement was accused of carrying cameras on poles to take photos of protestors. The government banned protestors from wearing masks and facial coverings because they made participants harder to identify, but in a policy reversal following the outbreak of Covid-19, it mandated masks to help limit the spread of the virus.
Will law enforcement adapt to mask-wearers?
The US National Institute of Standards and Technology (NIST) recently revealed that face masks fully covering an individual’s nose and mouth cause error rates in facial recognition between 5% and 50%. In its study, NIST tested one-to-one matching scenarios, such as what would be used for passport control and border crossing, where the algorithm compares the individual to the photo on their ID. It plans to test mass-surveillance scenarios (called one-to-many systems) later this year.
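The one-to-one versus one-to-many distinction can be made concrete with a short sketch, reusing the face-print-as-vector simplification from before. All names, vectors, and thresholds are invented for illustration; the point is only that verification checks a single claimed identity, while identification searches an entire gallery, which is why mass-surveillance systems are both harder to build and harder to test.

```python
import math

# Hypothetical enrolled face prints (toy 3-D vectors; real embeddings
# have hundreds of dimensions).
gallery = {
    "traveler_001": [0.10, 0.50, 0.30],
    "traveler_002": [0.80, 0.10, 0.40],
}

def verify(probe, claimed_id, threshold=0.2):
    """One-to-one matching: compare the probe against the single face
    print enrolled under the claimed identity, as at passport control."""
    return math.dist(probe, gallery[claimed_id]) <= threshold

def identify(probe, threshold=0.2):
    """One-to-many matching: search the whole gallery for the closest
    face print, as a mass-surveillance system would."""
    best = min(gallery, key=lambda i: math.dist(probe, gallery[i]))
    return best if math.dist(probe, gallery[best]) <= threshold else None

probe = [0.12, 0.49, 0.31]
print(verify(probe, "traveler_001"))  # True
print(identify(probe))                # traveler_001
```

A mask degrades the probe vector itself, so both modes suffer, but one-to-many search compounds the problem: every additional gallery entry is another chance for a degraded probe to land closest to the wrong person.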
But adapting algorithms to identify those wearing masks seems to be a priority for many companies and for the government. A research institute in Wuhan has made its “Real World Facial Recognition” data set open source; the institute claims that this data has the potential to train AI to identify masked people with 95% accuracy. Meanwhile, AI developer pdActive worked with the US Department of Defense to create advanced facial recognition and behavioral analysis technology to identify mask-wearers and provide accurate information for what it calls “Covid-19 relief efforts.” The company previously provided Fortune 500 companies with theft-reduction technology in the form of facial recognition. And it’s not the only company whose algorithms have been able to identify mask-wearers—multiple firms in China and the U.S., including Panasonic, claimed to have developed the technology as early as February, according to Recode. Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, referred to the rapid development of this technology as a “global arms race” to track and identify people even with masks on.
Hanvon, a software company in China, claims to have developed facial recognition tech to identify a crowd of 30 people in seconds—even with masks. The company has hundreds of clients in Beijing including the police department. But the technology still faces limitations.
Expanding Hanvon’s tech to a large group would be challenging: “…[the] technology struggles to identify people who are wearing both a mask and another accessory that would result in losing key facial information. This may include sunglasses, beards, or even winter wear that partially covers the face.” This type of tech would likely only work in ideal conditions, such as head-on angles and good lighting, and is unlikely to be adapted to a large crowd of protestors. Other systems have been tested as well, with few positive results. According to The Intercept, IPVM, an independent firm that tracks surveillance technology, tested four facial recognition systems this February, finding that their performance was drastically reduced with masked faces.
Since the Department of Homeland Security, which oversees ICE and CBP, has long been accused of overreach through surveillance tech, perhaps opposition to this type of public surveillance will come swiftly, even as facial recognition algorithms inevitably catch up. Just as wearing masks amid a pandemic preserves our health and the health of those around us, we must be vigilant in protecting our privacy and our civil liberties, especially given the troubling rise of fascism in this country as recently documented by The New York Times.
As Dashlane’s CMO, Joy Howard, puts it, “Privacy is nothing more than the ability to make our own choices without fear. Day to day, surveillance breeds conformity, which is anathema to creativity. Over time it emboldens and empowers tyranny absolutely.”