Will Our Robot Overlords Work Together Or Work Against Each Other?

One day, robots and computers with artificial intelligence will inevitably be tasked with managing everything from our economy to our traffic systems. But will these man-made managers have the empathy, reasoning, and emotions needed for cooperation?

That’s the question asked by DeepMind, the A.I. subsidiary of Google parent company Alphabet Inc. In a new study [PDF], DeepMind researchers attempt to determine under what circumstances selfish agents — either humans or robots — would work together toward one goal.

To determine this, the researchers pitted the robots — or artificial intelligence agents — against one another in two “social dilemmas.”

The outcomes of the dilemmas, DeepMind says in a blog post, could enable companies to create systems that better manage things like the economy, traffic, and environmental challenges.

In the first social dilemma, a game called Gathering, researchers required two agents to collect as many apples as possible. Each apple an agent collected earned it a reward.

The game also allowed each agent to “tag” the other with a beam, temporarily removing it from play. Tagging, however, earned the tagging agent no reward of its own.
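
As a rough illustration only, the reward rules described above can be sketched in a few lines of Python. The reward value, the length of the time-out, and the agent names here are assumptions made for the example, not figures from the study:

    # Hypothetical, simplified sketch of the Gathering reward rules.
    # The reward size and tag-out duration are illustrative assumptions.

    TAG_OUT_STEPS = 25   # how long a tagged agent sits out (assumed)

    class Agent:
        def __init__(self, name):
            self.name = name
            self.reward = 0
            self.tagged_until = 0   # step at which the agent re-enters play

        def active(self, step):
            return step >= self.tagged_until

    def collect_apple(agent, step):
        # Collecting an apple is the only action that earns a reward.
        if agent.active(step):
            agent.reward += 1

    def tag(shooter, target, step):
        # Tagging removes the opponent temporarily but earns nothing itself.
        if shooter.active(step) and target.active(step):
            target.tagged_until = step + TAG_OUT_STEPS

    a, b = Agent("blue"), Agent("red")
    tag(a, b, step=10)        # red is out until step 35; blue gains no reward
    collect_apple(a, step=11) # blue scores while red can't compete
    print(a.reward, b.reward) # 1 0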

When there were enough apples to go around, the agents tended to work peacefully. But when researchers limited the number of available apples, the agents learned it was better to tag their opponent, giving themselves time to collect the remaining apples uncontested.

While the Gathering game showed that the agents would look out for themselves when the situation became more difficult, the second game showed the opposite.

The second game, dubbed Wolfpack, required two agents to hunt a third, the prey, through an obstacle-filled environment. Points were awarded both to the agent that captured the prey and to any agent in the same vicinity at the moment of capture.
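
Again purely as an illustrative sketch, and with the point value and the “same vicinity” radius assumed for the example rather than taken from the study, the scoring rule rewards not just the wolf that makes the capture but any wolf close enough to have helped:

    # Hypothetical sketch of the Wolfpack scoring rule described above.
    # The reward size and capture radius are illustrative assumptions.
    import math

    CAPTURE_REWARD = 10
    CAPTURE_RADIUS = 3.0   # wolves within this distance of the prey share the reward

    def score_capture(wolves, prey_pos):
        # Give the capture reward to every wolf near the prey when it is caught.
        rewards = {}
        for name, pos in wolves.items():
            dist = math.dist(pos, prey_pos)
            rewards[name] = CAPTURE_REWARD if dist <= CAPTURE_RADIUS else 0
        return rewards

    # Two wolves corner the prey together; a third is far away and gets nothing.
    wolves = {"wolf_1": (4, 5), "wolf_2": (5, 6), "wolf_3": (20, 1)}
    print(score_capture(wolves, prey_pos=(5, 5)))
    # {'wolf_1': 10, 'wolf_2': 10, 'wolf_3': 0}

Because the points are shared with any wolf in range, there is something to gain from hunting together rather than racing ahead alone.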

Researchers note that because of the way points were awarded, the agents were more likely to implement complex strategies in order to cooperate.

“Depending on the situation, having a greater capacity to implement complex strategies may yield either more or less cooperation,” the researchers say in a blog post. “The new framework of sequential social dilemmas allows us to take into account not only the outcome of the interaction, but also the difficulty of learning to implement a given strategy.”

In the end, the researchers concluded that whether the agents worked together or turned on each other came down mainly to the rules of each game and how difficult the relevant strategies were to learn.

For example, in the Gathering game the cleverer AI agents behaved aggressively in all situations, because zapping is the more demanding strategy to learn: the agent has to aim the beam at the other agent and track its movements.

In the Wolfpack game, by contrast, cooperation was the harder skill to learn, since the agents had to coordinate with one another to track and corner the prey, so it was the cleverer agents that cooperated more.

Based on the study, the researchers say they may be able to better understand and control systems that depend on continued cooperation.
