Humanoid communication robot Kirobo talks with Fuminori Kataoka, project general manager from Toyota Motor Corp., during a Tokyo press unveiling in 2013. (Shizuo Kambayashi/Associated Press)

Each week, In Theory takes on a big idea in the news and explores it from a range of perspectives. This week we’re talking about robot intelligence. Need a primer? Catch up here.

Francesca Rossi is a professor of computer science at the University of Padova in Italy currently on leave at the IBM T.J. Watson Research Center. She is the former president of the International Joint Conference on AI.

Artificial intelligence has met with persistent skepticism since the term was coined at Dartmouth College in 1955. Recently, however, the research has greatly accelerated.

A society of natural and artificial minds, one in which humans and AI technology live together, is already in place and will be even more prevalent in the future. However, just like any other powerful technology, AI can also be dangerous if misused or not carefully developed. Until now, the emphasis has been on making machines faster and more precise — better able to reach a specific goal set by humans. Today, the aim should be to design intelligent machines capable of making their own good decisions according to a human-aligned value system.

Giving a machine a goal to achieve with ruthless efficiency isn’t going to create synergistic or positive relationships between AI and human beings. As proposed in the Future of Life Institute’s recent open letter advocating for “robust and beneficial” AI, we should try to build intelligent machines that reach their goals in the most effective manner but at the same time avoid negative side effects, functioning according to principles that are aligned with human ethical and moral values.

[Other perspectives: Killer robots? Superintelligence? Let’s not get ahead of ourselves.]

Common-sense reasoning and context evaluation will be important, since without these capabilities, machines can’t properly evaluate their impact on the environment or possible conflicts with essential ethical principles. Accountability, transparency, explanations and value alignment will also be crucial. Machines must be able to explain why they make certain decisions or why they are suggesting certain actions. Government and private agencies should facilitate research and development along these lines.

I am leading one of 37 research projects recently funded by the Future of Life Institute, thanks to a generous donation by Elon Musk. My team includes AI researchers, philosophers and psychologists, all of whom believe that the future will see autonomous agents acting in the same environment as humans over extended periods of time, in areas as diverse as driving, assistive technology and health care. In these scenarios, human-machine cooperation to make decisions will be the norm. Thus, our project focuses on embedding ethical principles in collective decision-making systems.

For this cooperation to work safely and beneficially for both humans and machines, artificial agents should follow moral values and ethical principles (appropriate to where they will act), as well as safety constraints. When directed to achieve a set of goals, agents should ensure that their actions do not violate these principles and values, whether overtly or through negligence, such as by taking unnecessarily risky actions.

It would be easier for humans to accept and trust machines that behave as ethically as we do, and these principles would make it easier for artificial agents to determine their actions and to explain their behavior in terms humans can understand. Moreover, if machines and humans needed to make decisions together, shared moral values and ethical principles would facilitate consensus and compromise. Imagine a room full of physicians trying to decide on the best treatment for a patient with a difficult case. Now add an artificial agent that has read everything ever written about the patient’s disease and similar cases, and thus can help the physicians compare the options and make a much more informed choice. To be trustworthy, the agent should care about the same values the physicians do: curing the disease should not come at the expense of the patient’s well-being.

Embedding ethical principles in a machine will not be easy: hard-coding them is not an option, since these machines should adapt over time. Computational constraints must also be taken into account, and different application areas (not to mention different cultures) demand different principles. Companion robots are one such case: elderly people in different cultures have different lifestyles and needs, and their companions should take those into account, not just generic ideals.

We are, however, confident that we can achieve our goal. But we’ll need to avoid taking extreme positions about AI’s impact, and instead work constructively to take full advantage of its potential. I strongly believe that AI will not replace us: Rather, it will empower us and greatly augment our intelligence. By helping us perform many of our tasks faster and better, AI will give us more time for ourselves and our loved ones — making us more human than ever.

Explore these other perspectives:

Patrick Lin: We’re building superhuman robots. Will they be heroes, or villains?

Ari N. Schulman: Do we love robots because we hate ourselves?

Murray Shanahan: Machines may seem intelligent, but it’ll be a while before they actually are

Dileep George: Killer robots? Superintelligence? Let’s not get ahead of ourselves.