Primary supervisor: Julian Gutierrez Santiago
- Michael Wooldridge, University of Oxford
Research area: Computational and Collective Intelligence
Model checking is an automated formal verification technique in which, given a property F (typically expressed as a temporal logic formula) and a model M of a system, one checks whether M satisfies F. Classical model checking is a well-understood technique supported by several tools, many of which are available online. Somewhat less widely known is probabilistic model checking. In this setting, M is a model containing probabilistic behaviour; for instance, M may be a Markov Decision Process (MDP) modelling the behaviour of an AI system, as in Reinforcement Learning (RL). In this case, instead of checking whether F is or is not satisfied by M, one is interested in computing the probability that F is satisfied in M.

The goal of this PhD project is to investigate several aspects of automated verification, in particular in the context of AI systems modelled as probabilistic multi-agent systems (MAS), such as those represented as Markov games or Multi-Agent MDPs.
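To make the idea concrete, the sketch below (illustrative only; the MDP, its states, and its transition probabilities are all hypothetical, not taken from the project description) computes the maximum probability of satisfying a simple reachability property F = "eventually reach the goal state" in a tiny MDP, using value iteration, one of the standard computations underlying probabilistic model checking:

```python
# Hypothetical MDP: state -> action -> list of (next_state, probability).
# All names and numbers here are made up for illustration.
mdp = {
    "s0":   {"a": [("s1", 0.5), ("s2", 0.5)], "b": [("s2", 1.0)]},
    "s1":   {"a": [("goal", 0.9), ("fail", 0.1)]},
    "s2":   {"a": [("goal", 0.3), ("fail", 0.7)]},
    "goal": {},  # absorbing target state
    "fail": {},  # absorbing failure state
}

def max_reach_prob(mdp, target, iterations=100):
    """Value iteration for Pmax(eventually reach `target`)."""
    # Initialise: probability 1 in the target, 0 elsewhere.
    p = {s: (1.0 if s == target else 0.0) for s in mdp}
    for _ in range(iterations):
        new_p = {}
        for s, actions in mdp.items():
            if not actions:
                new_p[s] = p[s]  # absorbing states keep their value
            else:
                # Resolve nondeterminism optimally: pick the action
                # maximising the expected probability of reaching target.
                new_p[s] = max(
                    sum(prob * p[t] for t, prob in dist)
                    for dist in actions.values()
                )
        p = new_p
    return p

probs = max_reach_prob(mdp, "goal")
print(probs["s0"])  # maximum probability of eventually reaching "goal" from s0
```

In this toy example the optimal policy chooses action "a" in s0, giving probability 0.5 * 0.9 + 0.5 * 0.3 = 0.6 of reaching the goal. Mature tools such as PRISM and Storm implement this kind of computation (and much more) at scale.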
The student should have strong mathematical skills and a genuine interest in (and ideally some knowledge of) game theory, automata theory, computational complexity, and the basics of formal logic and proof.