Researchers at Google’s DeepMind division have tested artificial intelligence on the prisoner’s dilemma, examining how an AI agent makes decisions when the outcome depends not only on its own actions but also on the actions of another AI agent.
Core of the experiment
The prisoner’s dilemma is a classic problem of choosing the best strategy under uncertainty. It was formulated by the American mathematicians Merrill Flood and Melvin Dresher in the middle of the past century.
The core of the dilemma is the following: two prisoners held in separate cells are each offered the chance to confess to a crime. Depending on their decisions, the outcomes differ:
- both prisoners confess – each goes to prison for 5 years;
- one confesses and the other doesn’t – the confessor is sentenced to 3 years and the other to 10 years;
- neither confesses – both are set free.
It is important to note that the prisoners cannot communicate or coordinate with each other.
Mathematicians assume that a person tends to choose the option with the smallest worst-case loss. By that logic, both prisoners will likely confess and receive equal terms. DeepMind researchers wondered how AI would act in the same situation.
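The worst-case reasoning above can be sketched in a few lines of code. The payoff numbers come from the scenario described earlier; the function name and structure are illustrative, not part of the original study.

```python
# Toy payoff matrix for the dilemma above (years in prison; numbers are
# the illustrative ones from the scenario). Keys are
# (my move, other prisoner's move).
YEARS = {
    ("confess", "confess"): 5,
    ("confess", "silent"): 3,
    ("silent", "confess"): 10,
    ("silent", "silent"): 0,
}

def minimax_choice(payoffs):
    """Pick the move whose worst-case sentence is smallest."""
    moves = {mine for mine, _ in payoffs}
    worst = {m: max(payoffs[(m, other)] for _, other in payoffs)
             for m in moves}
    return min(worst, key=worst.get)

print(minimax_choice(YEARS))  # "confess": worst case 5 years vs 10 for silence
```

Confessing risks at most 5 years, while staying silent risks 10, which is why the risk-minimizing prisoner confesses.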
To run the experiment, the researchers created two simple games for two AI agents:
1. Gathering: the agents had to collect as many apples as possible; each could also fire a laser to temporarily immobilize the opponent and gain an advantage.
2. Wolfpack: the two agents had to catch prey while navigating obstacles. All “hunters” near the prey at the moment of capture received points, so it paid for the agents to cooperate and act together.
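The Wolfpack scoring rule described in point 2 can be sketched as follows. The capture radius, coordinates, and function name are hypothetical stand-ins, not values from DeepMind’s actual environment.

```python
import math

# Hypothetical sketch of Wolfpack's scoring rule: when the prey is caught,
# every hunter within CAPTURE_RADIUS of it earns a point, so staying close
# to a teammate pays off. The radius is illustrative.
CAPTURE_RADIUS = 2.0

def wolfpack_scores(hunters, prey):
    """Return one score per hunter: 1 if near the prey at capture, else 0."""
    return [
        1 if math.dist(pos, prey) <= CAPTURE_RADIUS else 0
        for pos in hunters
    ]

# Two hunters cooperating near the prey both score; a distant one does not.
print(wolfpack_scores([(0, 0), (1, 1), (9, 9)], prey=(0, 1)))  # [1, 1, 0]
```

Because nearby hunters share in the reward, an agent maximizing its own score is pushed toward sticking with its partner rather than hunting alone.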
In the end, the researchers found that the AI pursued its goals using whatever behavior worked best. In the first game, where the agents had to compete, they behaved aggressively; in the second, they cooperated, since cooperation was more rewarding.
According to the researchers, this behavior depends on the computational capacity available to the agents. Artificial intelligence uses every available means to reach its goal. It does not think like a human being, but neither does it aim to destroy its own kind for momentary gain; it simply performs the given task as well as it can.
The results of the experiments show that different AI programs can interact successfully while solving shared tasks. For example, one agent could manage road traffic while another works to reduce environmental pollution; they will not come into conflict as long as the right parameters are specified up front.
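One simple way to read “specifying certain parameters up front” is combining the two objectives into a single reward with fixed weights, so neither goal can be pursued at the expense of the other. The weights, inputs, and function below are entirely hypothetical.

```python
# Illustrative sketch: a shared reward combining traffic throughput (good)
# and emissions (bad) with fixed weights chosen in advance. All names and
# numbers are hypothetical, not from the DeepMind study.
def combined_reward(throughput, emissions, w_traffic=0.6, w_env=0.4):
    """Scalar reward both agents optimize; weights fix the trade-off."""
    return w_traffic * throughput - w_env * emissions

print(combined_reward(10, 5))  # 0.6*10 - 0.4*5 = 4.0
```

With the trade-off fixed by the weights, improving one objective while degrading the other past the agreed ratio lowers the shared reward, which discourages conflict between the agents.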