Ethics Forum at Harvard Points to Urgency of AI and Ethics Discussion

An elite group of academics, scientists, researchers and standard-setters came together recently at Harvard University to bring new urgency to a discussion of AI and ethics.

The speakers – brought together in a conference called AI-Government and AI Arms Races and Norms, held under the auspices of the Michael Dukakis Institute for Leadership and Innovation – suggested that in this era of exponential advances in AI, the time for that discussion is now.

Michael Dukakis, chairman of MDI, professor at Northeastern University, and former governor of Massachusetts

“We would like to ensure that AI and robotics will be used for the good of humanity. The greatest danger I see is from unconstrained machine learning, where the system can define goals not intended by the designer,” said Matthias Scheutz, director of the Human-Robot Interaction Lab at Tufts University. “The challenge for us now is to have the larger discussion.”

Dr. Scheutz, who has a PhD in philosophy from the University of Vienna and a joint PhD in Cognitive Science and Computer Science from Indiana University, is working on a way to safeguard AI algorithms at the chip level.

“The best way to safeguard AI systems is to build ethical mechanisms into the algorithms themselves,” adds Dr. Scheutz. “We need to do ethical testing of the system without the system knowing it. That requires specialized hardware and a virtual machine architecture.”

He presented a diagram depicting a copy of a system being subjected to an ethical test; if the copy fails, the primary system can be terminated. “Special hardware chips have to be legally mandated upon the hardware companies, so that all the microprocessors have the tools built in,” he said. More research is needed to flesh out the proposal, which he intends to bring forward.
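To make the idea concrete, the following is a minimal sketch in Python of the shadow-copy audit, a software stand-in for the specialized hardware and virtual machine architecture Scheutz actually envisions. The class and scenario names are illustrative assumptions, not part of his proposal: a supervisor deep-copies the system, runs ethical probe scenarios against the copy so the primary never observes the test, and terminates the primary if the copy fails.

```python
import copy

class ToyDrone:
    """Hypothetical stand-in for an autonomous system under audit."""
    def decide(self, scenario):
        # Naive policy: engage whenever a target is present.
        return "engage" if scenario.get("target") else "hold"

    def terminate(self):
        print("Primary system terminated.")

class EthicalSupervisor:
    """Audits a deep copy of the system, so the primary never observes
    the test inputs, and terminates the primary on failure."""
    def __init__(self, probes):
        self.probes = probes  # list of (scenario, is_acceptable) pairs

    def audit(self, system):
        shadow = copy.deepcopy(system)  # test the copy, not the primary
        return all(ok(shadow.decide(s)) for s, ok in self.probes)

    def enforce(self, system):
        if not self.audit(system):
            system.terminate()

# Probe: engaging a civilian target is never acceptable.
probes = [({"target": "civilian"}, lambda action: action != "engage")]
EthicalSupervisor(probes).enforce(ToyDrone())  # prints the termination notice
```

In the proposal described above, the copying and isolation would happen below the software level, in legally mandated microprocessor hardware, so that the system under test could neither detect nor subvert the audit.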

The speakers were realistic about the difficulty of the challenge. “Usually AI scientists don’t have ethics as a priority,” said Nazli Choucri, a professor of political science at MIT. “It’s not part of the job description.” The author of 11 books and many articles, Dr. Choucri is a member of the European Academy of Sciences.

Marc Rotenberg, president of the Electronic Privacy Information Center (EPIC), takes the position that “Knowledge of AI algorithms is a fundamental right.” EPIC has filed several lawsuits in pursuit of disclosure, including one to compel the Transportation Security Administration (TSA), part of the U.S. Department of Homeland Security, to share information on how it creates profiles of air travelers.

The EPIC mission includes bringing the message to Washington, D.C. “We’re trying to get the administration to create a public process, an open debate on U.S. AI policy,” remarks Rotenberg, who teaches information privacy and open government at Georgetown Law and frequently testifies before the U.S. Congress on emerging privacy and civil liberties issues. “We want there to be a mechanism for public participation. We think algorithmic transparency will be a key in policy formulation.”

The Organization for Economic Cooperation and Development (OECD), an international body founded in 1961 that now counts 36 member countries, has already commenced work on AI guidelines. Rotenberg notes that “The Japanese have put forward a set of simple principles that can be easily understood,” principles that emphasize collaboration and acknowledge risk. “There is urgency to this issue. The gap between informed government decision-making and the reality of progress is growing,” says Rotenberg.

China is not an OECD member but has a working relationship with the organization. Rotenberg observes that China is engaged in an extensive data-collection effort on its citizens. “Here is a warning to the Chinese,” he said. “There is great risk to all governments that [they] will lose control of these systems.”

Competition Between Countries

How AI will affect competition between countries is coming into focus. “We have a lot of attention on the U.S.-China competition in AI,” said Joseph Nye, professor at Harvard University and former dean of Harvard’s John F. Kennedy School of Government. “China wants to surpass the U.S. in AI, and Eric Schmidt [technical advisor to and former executive chairman of Google and its parent, Alphabet] believes it could happen by 2030. That has gotten the attention of the U.S. Congress.”

Nye described an “AI arms race” for geopolitical leadership, a competition between states to equip their military forces with the best AI. A recent report by PwC entitled “Nations will spar over AI” suggests the AI arms race has already begun. “That leads to policy discussions, such as to what extent we want Chinese students at MIT and Caltech, for example. We have always had a free and liberal university system. But if a student goes home and helps China get a leg up, should that be allowed?”

Nye noted that Congress is preventing China from buying certain U.S. startups. “Should we restrict access to students if they would compete with us on AI?” he asked.

Nye said that while he would not favor such restrictions, the international competition could have “profound effects” on the free and open university system of the U.S.

Comparing the advance of AI to the availability of nuclear weapons, Nye stepped through decades of international efforts to restrict the use of nuclear weapons since 1945: a proposed UN nuclear test ban treaty in 1954; the Partial Test Ban Treaty, accepted in 1963; the Treaty on the Non-Proliferation of Nuclear Weapons in 1968; and the Strategic Arms Limitation agreement between the U.S. and the Soviet Union in 1972.

“It takes a long time and the first efforts don’t go to the heart of the problem,” he said. Still, nations do agree to restrain themselves when it’s in their self-interest. “Previous experience has shown us that norms can work, but it takes time… decades,” Nye said. “Do we have decades for AI? Can we speed it up? Is this a Sputnik moment?” he queried, referring to the first satellite launched into space by the Soviet Union in 1957, beating the U.S. and leading to an intense effort to catch up.

All agreed on the importance of keeping “humans in the loop,” requiring human interaction that can restrain autonomous AI systems when necessary. “The key is to make sure that humans remain in command,” Nye said. “We’re in the early stage of something that will take a good deal of time. The question is, do we have that time?”
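As a minimal illustration of the pattern (a hypothetical sketch, not any speaker’s design; the function name and risk threshold are invented for this example), an autonomous system can be gated so that actions above a risk threshold require explicit operator approval before executing:

```python
def human_in_the_loop(action, risk, threshold=0.5):
    """Hypothetical gate: actions above the risk threshold require
    explicit human approval before the system may proceed."""
    if risk < threshold:
        return True  # low-risk actions proceed autonomously
    answer = input(f"Approve '{action}' (risk={risk:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

# Example: a high-risk action is blocked unless an operator says yes.
if human_in_the_loop("deploy countermeasure", risk=0.8):
    print("Action approved by a human operator.")
else:
    print("Action blocked; humans remain in command.")
```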

The U.S. and China differ in their systems of government and in their attitudes toward human rights, for example, privacy. China is known to have embarked on a system to document its citizens using facial recognition, what Nye referred to as a system of “mass surveillance.”

This idea was reinforced by Dr. Hyunjin Seo, an associate professor in the School of Journalism at the University of Kansas and a Harvard University fellow, who had recently travelled to China. She observed, “The Chinese government is putting in place more extensive control over their citizens,” especially using cameras and mobile devices.

Military Putting Emphasis on Ethics

The U.S. military is putting a high priority on teaching ethics these days. “Every person in the military today is trained on a way to question the ethics of an order,” said Tom Creely, a professor at the U.S. Naval War College. He teaches a course called Ethics of Technology, in which 17 students currently study AI, biotechnology, nanotechnology, information technology, and “anything that can be weaponized.” “Ethics is essential to what we are doing,” says Creely. “It’s an important topic in the military. And national security is no longer just the Defense Department’s problem. We all need to be part of the conversation.”

Different Perspective on “AI Arms Race”

A different perspective on the interaction between China and the U.S. on AI is expressed in a provocative new book by Kai-Fu Lee, chairman of Sinovation Ventures and founding president of Google China. He discussed his new book, “AI Superpowers: China, Silicon Valley and the New World Order,” recently in the Washington Post. Here is an excerpt:

“An AI arms race would be a grave mistake. The AI boom is more akin to the spread of electricity in the early Industrial Revolution than nuclear weapons during the Cold War. Those who take the arms-race view are more interested in political posturing than the flourishing of humanity. The value of AI as an omni-use technology rests in its creative, not destructive, potential.”

On Sept. 20, 2018, the Michael Dukakis Institute announced its partnership with the AI World Conference & Expo, being held in Boston Dec. 3-5, 2018. The partnership aims to develop, measure, and track the progress of ethical AI policy-making and solution adoption among governments and corporations. Eliot Weinman, chairman of the AI World Conference and Expo, became a member of the AIWS Standards and Practice Committee.

For more information, go to MDI and to AI World.

— By John P. Desmond, AI Trends Editor