I’m an Amazon Employee. My Company Shouldn’t Sell Facial Recognition Tech to Police.
When a company puts new technologies into the world, it has a responsibility to think about the consequences. Amazon, where I work, is currently allowing police departments around the country to purchase its facial recognition product, Rekognition, and other employees and I demand that it stop immediately.
A couple of weeks ago, my co-workers delivered a letter to this effect, signed by over 450 employees, to Jeff Bezos and other executives. The letter also demanded that Amazon kick Palantir, the software firm that powers much of ICE’s deportation and tracking program, off Amazon Web Services, and that it institute employee oversight for ethical decisions.
We know Bezos is aware of these concerns and the industry-wide conversation happening right now. On stage, he acknowledged that big tech’s products might be misused, even exploited, by autocrats. But rather than meaningfully explain how Amazon will act to prevent the bad uses of its own technology, Bezos suggested we wait for society’s “immune response.”
If Amazon waits, we think the harm will be difficult to undo.
After all, our concern isn’t about some future harm caused by some other company: Amazon is designing, marketing, and selling a system for dangerous mass surveillance right now. Amazon’s website brags of the system’s ability to store and search tens of millions of faces at a time. Law enforcement has already started using facial recognition with virtually no public oversight or debate, and with no restrictions on use from Amazon. Orlando, Florida, is testing Rekognition with live video feeds from surveillance cameras around the city. A sheriff’s department in Oregon is currently using Rekognition to let officers in the field compare photos against a database of mugshots. This is not a hypothetical situation.
We know from history that new and powerful surveillance tools left unchecked in the hands of the state have been used to target people who have done nothing wrong; in the United States, a lack of public accountability already results in outsized impacts and over-policing of communities of color, immigrants, and people exercising their First Amendment rights. Ignoring these urgent concerns while deploying powerful technologies to government and law enforcement agencies is dangerous and irresponsible.
That’s why we were disappointed when Teresa Carlson, vice president of the worldwide public sector of Amazon Web Services, recently said that Amazon “unwaveringly supports” law enforcement, defense, and intelligence customers, even if we don’t “know everything they’re actually utilizing the tool for.” Amazon has even recommended using Rekognition with officer body cameras, which would turn a tool intended for police accountability into a mobile surveillance device aimed at the public. Why are we building this?
We all have the right to go about our lives without being constantly monitored by the government. Companies like ours should not be in the business of facilitating authoritarian surveillance. Not now, not ever. But Rekognition supports just that by pulling dozens of facial IDs from a single frame of video and storing them for later use or instantly comparing them with databases of millions of pictures.
Selling this system runs counter to Amazon’s stated values. We tout ourselves as a customer-centric company, and Bezos has directly spoken out against unethical government policies that target immigrants, like the Muslim ban.
We cannot profit from a subset of powerful customers at the expense of our communities; we cannot avert our eyes from the human cost of our business. The product we’re selling is a flawed technology that reinforces existing bias. Studies have shown that facial recognition is more likely to misidentify people with darker skin. This was clearly demonstrated by a recent test of Rekognition that ran pictures of every member of Congress against a collection of mugshots. There were 28 false matches, and they fell disproportionately on members of color. But even if these inaccuracies were fixed, it would still be irresponsible, dangerous, and unethical to allow government use of this software. The biases that produced these errors exist within wider society and our justice system, and the use of facial recognition will only reproduce and amplify those existing systems of oppression.
The current political environment makes the idea of selling facial recognition products to the government even more objectionable. Police have stepped up spying on black activists, and the Trump administration is continuing its all-out assault on immigrants. Supercharging surveillance is not something we want to contribute to in any way. For Amazon to say that we require our Rekognition customers to follow the law is no guarantee of civil liberties at all—it’s a way to avoid taking responsibility for the negative uses of this technology.
We follow in the steps of the Googlers who spoke out against the Maven contract and the Microsoft employees who are speaking out against the JEDI contract. Regardless of our personal views on the military, no one should be profiting from “increasing its lethality.” We will not silently build technology to oppress and kill people, whether in our country or in others.
Amazon talks a lot about values of leadership. If we want to lead, we need to make a choice between people and profits. We can sell dangerous surveillance systems to police or we can stand up for what’s right. We can’t do both.