Sarah Sewall, the Speyer distinguished scholar and professor at the Johns Hopkins School of Advanced International Studies, served as undersecretary of state for civilian security, democracy and human rights in the Obama administration.
Google recently pledged to stop using its technology to sort and classify information for the U.S. military, citing employees’ ethical concerns. Then the news broke that the company was quietly developing a censored search engine for China. The principles guiding Google’s use of artificial intelligence still hark back to its original mantra, “Don’t be evil” — but which of the two projects poses the greater evil?
Google’s participation in Project Maven, which uses artificial intelligence to interpret and categorize data for U.S. military forces, prompted internal controversy. Some Google employees quit, and thousands of others expressed opposition to work that could be used for weapons targeting.
I understand why people worry that they would “be evil” by supporting military efforts. In practice, though, such assistance can advance humanitarian ends. When I ran a human rights center at Harvard University, our research showed that the U.S. military could be doing a better job of preventing civilian harm in war. In 2010, I persuaded military leaders to let me lead a comprehensive field study of the problem.
Unlike some nations’ forces, the U.S. military makes significant efforts to mitigate civilian harm. Yet in Afghanistan, American aircrews and ground forces repeatedly misidentified civilians as combatants, for example by mistaking a farmer working with a hoe in the cooler night hours for an enemy emplacing an improvised explosive device. U.S. forces sometimes failed to confirm civilian presence prior to attacks, lacking a “pattern of life” assessment that might have revealed domestic activity within a compound of buildings. So improved target identification could be key to reducing civilian deaths.
Technology can more quickly process the sheer mass of data that surveillance platforms collect and more efficiently learn to identify objects and patterns. Project Maven harnesses machine learning to provide more accurate information for military review. To be sure, more accurate information enables weapons and warfare in general, so pacifists predictably will object. Yet unless we believe that the United States should not be able to fight a war at all, we presumably want to help minimize unintended harm. Better information cannot guarantee the protection of civilians, but it can help avoid the kinds of problems we documented in Afghanistan. There is a plausible ethical case for working on Project Maven.
Project Dragonfly, the censored search engine Google reportedly offered to Chinese authorities, raises a different set of issues. When I joined the State Department in 2014, we already knew the Chinese government systematically repressed and controlled minorities — Tibetans, and now Uighurs as well. Broader government surveillance once focused on lawyers and human rights activists. Gradually, though, the Communist Party sought to define truth for all — restricting what the public can see or know, and penalizing unapproved private expression. Internet search engines help power this coercion.
Having weaponized the Web, the Communist Party now sees advanced technology as key to its control. Using machine learning and artificial intelligence, the government is creating a dystopia. Social credit ratings, gleaned from personal data, are already beginning to reflect individuals’ compliance with party dictates and determine where citizens can travel, go to school and work. Algorithms will soon be in control. No wonder China’s strategy is to become the global leader in artificial intelligence by 2030.
Google deserves credit for leading the way in defining moral responsibilities for the brave new world it is helping to create. Still, its new artificial-intelligence principles, which include commitments not to injure people, violate human rights or cause overall harm, raise so many questions that they cannot guarantee sensible outcomes, as Google’s own actions suggest.
An ethical lens is shaped by its aperture, and corporate decision-making has a broader geopolitical context. Perhaps powerful private actors have no ethical or political responsibilities to their nations of origin. Even so, should they not have a moral — or at least self-interested — stake in sustaining democratic political systems and values?
What a terrific irony it would be if companies forged from abundant U.S. freedom and claiming to promote individualism and openness ultimately helped the Chinese Communist Party consolidate power internally and globally. In the long run, the global stage and the arc of history must be important ethical considerations, and they may also be the key to any ethical company’s survival.