Adaptive Playing and Opponent Modelling in Competitive Games

Master of Science in Artificial Intelligence and Robotics

Author: Francesco Frattolillo

Thesis Advisor: Prof. Luca Iocchi 

Co-advisors:

Prof. Roberto Capobianco

Nicolò Brandizzi, PhD

Abstract

Despite the success of Artificial Intelligence systems such as Deep Blue, AlphaGo and OpenAI Five, most modern commercial video games still rely on scripted behavior. This is mainly because developers are concerned about the unpredictable behavior of AI.

The ability to create a balanced match between the player's skill and the game's level of difficulty can greatly improve the gaming experience.

Previous attempts at achieving this goal have required a large amount of data from human matches or specific knowledge related to the game.

In our work, we propose a new method that eliminates the need for data from matches between human players and reduces the amount of game-specific knowledge required.

Train and Store

In this phase we train an agent using reinforcement learning and self-play techniques.

During training we periodically evaluate the agent and store some of its best policies for the next step.
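The train-and-store loop can be sketched roughly as follows. This is a minimal illustrative sketch, not the thesis implementation: `Agent`, `play_match`, the skill update, and the evaluation thresholds are all hypothetical stand-ins for the actual reinforcement-learning and self-play machinery.

```python
import random

class Agent:
    """Hypothetical agent; `skill` stands in for learned policy parameters."""
    def __init__(self):
        self.skill = 0.0

    def policy_snapshot(self):
        # Freeze the current policy so it can be stored and reused later.
        return {"skill": self.skill}

def play_match(agent, opponent_snapshot):
    """Placeholder for a full game: higher skill wins more often."""
    margin = agent.skill - opponent_snapshot["skill"]
    return random.random() < 0.5 + 0.1 * margin

def train_and_store(iterations=10, eval_games=20, win_threshold=0.6):
    agent = Agent()
    stored_policies = [agent.policy_snapshot()]  # pool of past selves
    for _ in range(iterations):
        agent.skill += random.uniform(0.0, 0.5)  # stand-in for an RL update
        # Self-play evaluation against the most recently stored policy.
        wins = sum(play_match(agent, stored_policies[-1])
                   for _ in range(eval_games))
        if wins / eval_games >= win_threshold:
            stored_policies.append(agent.policy_snapshot())
    return stored_policies
```

The stored snapshots naturally form a ladder of increasing strength, which the next phase relies on.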

[Figure: Learning behaviour]
[Figure: Algorithm 1]

Adaptive Playing

We use the previously stored policies to create a dataset of matches between agents using these policies. 

We train a model to recognize the level of the agents by looking at their sequence of moves.
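The idea of classifying an agent's level from its sequence of moves can be sketched as below. This is an assumption-laden toy: the discrete move set, the level-dependent move distributions, and the nearest-centroid classifier are illustrative placeholders, not the model used in the thesis.

```python
import random
from collections import Counter

MOVES = ["attack", "defend", "wait"]  # hypothetical discrete move set

def sample_moves(level, length=30):
    """Stand-in policy: stronger levels attack more often."""
    weights = [1 + level, 2, 2 - min(level, 1.5)]
    return random.choices(MOVES, weights=weights, k=length)

def features(moves):
    # Represent a match by its move-frequency vector.
    counts = Counter(moves)
    return [counts[m] / len(moves) for m in MOVES]

def build_dataset(levels=(0, 1, 2), matches_per_level=50):
    # One labeled example per simulated match between stored policies.
    return [(features(sample_moves(lv)), lv)
            for lv in levels for _ in range(matches_per_level)]

def train_centroids(dataset):
    """Nearest-centroid 'model': average feature vector per level."""
    sums, counts = {}, Counter()
    for x, lv in dataset:
        acc = sums.setdefault(lv, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[lv] += 1
    return {lv: [v / counts[lv] for v in acc] for lv, acc in sums.items()}

def predict_level(centroids, moves):
    x = features(moves)
    return min(centroids,
               key=lambda lv: sum((a - b) ** 2
                                  for a, b in zip(x, centroids[lv])))
```

In practice a sequence model over the raw move order would capture more than these frequency features; the sketch only shows the dataset-building and classification pipeline.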

Learning Behaviour

This phase provides a script that allows an external user to play against the system.

At the end of each match, the system readjusts the opponent's level.
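A per-match readjustment rule could look like the following sketch, assuming the stored policies form an ordered ladder of levels; the step-by-one update is an illustrative assumption, not necessarily the exact rule used in the thesis.

```python
def readjust_level(current_level, player_won, num_levels):
    """Move one rung up when the player wins, one rung down otherwise,
    clamped to the range of stored policies."""
    step = 1 if player_won else -1
    return max(0, min(num_levels - 1, current_level + step))
```

For example, a win at level 2 on a 5-level ladder moves the opponent to level 3, while a loss at level 0 keeps it at 0.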

Thesis PDF