The chorus of tech workers demanding American tech companies put ethics before profit is growing louder.
In recent days, employees at Google and Microsoft have been pressuring company executives to drop bids for a $10 billion contract to provide cloud computing services to the Department of Defense.
As part of the contract, known as JEDI, engineers would build cloud storage for military data; there are few public details about what else it would entail. But one thing is clear: The project would involve using artificial intelligence to make the US military a lot deadlier.
“This program is truly about increasing the lethality of our department and providing the best resources to our men and women in uniform,” John Gibson, chief management officer at the Defense Department, said at a March industry event about JEDI.
Thousands of Google employees reportedly pressured the company to drop its bid for the project, and many said they would refuse to work on it. They pointed out that such work may violate the company’s new ethics policy on the use of artificial intelligence. Google has pledged not to use AI to make “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” a policy company employees had pushed for.
On October 8, Google announced that it was pulling out of the running for the JEDI contract. Now Microsoft employees are pushing executives to do the same.
“With no transparency in these negotiations, and an opaque ethics body that arbitrates moral decisions, accepting this contract would make it impossible for the average Microsoft employee to know whether or not they are writing code that is intended to harm and surveil,” wrote an anonymous group of Microsoft employees in a letter published Friday (and verified) by Medium.
It’s unclear how many employees are part of the group, but it may not matter, as Microsoft has indicated it won’t drop its bid on the cloud computing contract for the Pentagon.
Internal protests at some of America’s most powerful tech companies reflect mounting employee concerns about the ethical implications of the technology they are developing. Some of their protests have had an impact; others have not. But their calls to put ethics and values before profit are forcing Silicon Valley to consider the moral ramifications of what they’re creating, and whether it’s benefiting humanity and “promoting fundamental human rights” — or doing the opposite.
Employees are worried about government contracts
Employees at different tech companies are worried about different types of projects, but they do have one thing in common: a shared concern about government contracts, and the risk that government officials could use their technology to violate basic human rights. As a worst-case scenario, they often cite the example of IBM’s contract with Nazi Germany, in which the American tech company developed a system that helped the Nazis classify, organize, and murder Jews.
One technology that workers are concerned about is facial recognition software. A group of 450 Amazon employees reportedly signed a letter asking CEO Jeff Bezos to stop selling its facial recognition software, Rekognition, to law enforcement agencies, according to an Amazon employee who published an anonymous opinion piece Tuesday on Medium (the publishing platform verified the author’s employment at Amazon).
“We cannot avert our eyes from the human cost of our business,” the employee wrote, calling the software a “flawed technology that reinforces existing bias.”
According to the Amazon employee, studies show that facial recognition software often misidentifies people with darker skin. The employee cited a recent test of Amazon’s Rekognition software by the American Civil Liberties Union, which ran photos of every member of Congress against a collection of mugshots. There were 28 false matches, and the incorrect results were disproportionately higher for people of color. But police in Orlando are testing out Amazon’s program on city surveillance cameras, and sheriff’s deputies in Oregon are reportedly using it in the field.
The potential risks are too high, wrote the Amazon employee, who chose to remain anonymous out of fear of professional retribution.
We know from history that new and powerful surveillance tools left unchecked in the hands of the state have been used to target people who have done nothing wrong; in the United States, a lack of public accountability already results in outsized impacts and over-policing of communities of color, immigrants, and people exercising their First Amendment rights. Ignoring these urgent concerns while deploying powerful technologies to government and law enforcement agencies is dangerous and irresponsible.
The anonymous Amazon employee and a group of colleagues outlined these concerns in the letter they sent to Bezos over the summer, which has now been signed by 450 employees. In the letter, they also demanded that Amazon Web Services stop hosting the software firm Palantir, which helps immigration authorities track and deport immigrants. They also asked the CEO to allow employee input on company decisions that raise ethical questions.
As of Tuesday, Bezos had not replied to the letter, but he has since defended the company’s decision to do business with the government (Amazon is also bidding on the JEDI contract). As of press time, Amazon had not responded to questions from Vox about the employees’ demands.
Google employees have had the most impact on corporate decision-making
Thousands of tech workers at Google have been questioning whether the company has “lost its moral compass” in the corporate pursuit to enrich shareholders.
In April, more than 3,000 Google employees protested the company’s military contract with the Pentagon — known as project Maven — which involved technology to analyze drone video footage that could potentially identify and kill human targets.
About a dozen engineers resigned over what they viewed as an unethical use of artificial intelligence, prompting Google to let the contract expire in June and leading executives to promise that they would never use AI technology to harm others or cause human suffering.
A few months later, an investigation by the Intercept revealed that Google is secretly working on another questionable project: a group of engineers is developing a censored search engine for Chinese officials in Beijing.
The search engine under development, known as project Dragonfly, is designed to hide search results that China’s authoritarian government wants to suppress, such as information about democracy, free speech, peaceful protest, and human rights, according to an investigation published in August by the Intercept.
After the news of Dragonfly leaked in August, more than 1,400 Google employees signed a letter demanding more transparency and accountability about the project’s potential impact on human rights. The controversy has reportedly prompted at least five Google employees to quit in protest.
More than a dozen human rights groups have also urged the company to halt the project. “As it stands, Google risks becoming complicit in the Chinese government’s repression of freedom of speech and human rights in China,” they wrote.
Now Google is reportedly cracking down on employees who say the tool will also allow a Chinese partner to closely track and monitor users.
In addition to hiding search results that the Chinese government wants to suppress, Google’s new search engine would also track a user’s location and would share an individual’s search history with a Chinese partner, who would have “unilateral access” to the data. This includes access to a user’s telephone number, according to an employee memo obtained in September by the Intercept.
Google executives have revealed little about the project, but a Google spokesperson told me in a statement earlier this month that “the work on search has been exploratory, and we are not close to launching a search product in China.”
At an event this week, CEO Sundar Pichai reiterated that stance and defended the project, saying that working in China is a good thing and that Google wouldn’t censor most Chinese search results.
If Google goes ahead with the project, it’s a striking reversal of the strong stance the company took back in 2010, when it decided to leave China in protest of the Chinese government’s hacking of Gmail and its crackdown on free speech. The decision clashes with the principles the company adopted in June after the Pentagon contract controversy, in which Pichai promised that the company would not use artificial intelligence to develop technology “whose purpose contravenes widely accepted principles of international law and human rights.”
Google employees say these kinds of promises are no longer enough, in light of the news about the censorship tool, and they are demanding a more formal role in decisions about the ethical implications of their work.
Even as Google presses forward with Dragonfly despite employee concerns, workers’ demands related to the use of artificial intelligence appear to be having an impact on corporate decision-making.
Google said it would drop its bid on the Pentagon contract in part because “we couldn’t be assured that it would align with our AI Principles.”
The Tech Workers Coalition, a group of Silicon Valley professionals who advocate for giving employees more input into company ethics decisions, said the move was entirely the result of employee pressure.