
Robust Neuro-symbolic Planning


Planning is the reasoning side of acting in Artificial Intelligence: it automates the selection and organisation of actions to reach desired states of the world as well as possible. For many real-world planning problems, however, it is difficult to obtain a full model of the world that captures its complex dynamics. Fortunately, the unknown parts of the model can be accurately approximated by (deep) neural networks, yielding neuro-symbolic models that can then be used in planning. One important limitation of this learning-and-planning framework is the optimiser’s curse: the suboptimal decision-making that results from planning optimally with respect to an incorrectly learned neuro-symbolic (neural network) model. This project will study the following two fundamental research questions, which are at the core of overcoming the optimiser’s curse:

1. How can we plan robustly with respect to the prediction errors (i.e., interpolation and extrapolation errors) of the learned neuro-symbolic (neural network) models?

2. How can we train neuro-symbolic (neural network) models that are robust to the optimiser’s curse?
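
To make the optimiser’s curse described above concrete, the following Python sketch simulates it in its simplest form: a planner that picks the action with the highest value under a noisy (imperfectly learned) model consistently obtains less value than the model led it to expect. The setup (five candidate actions, Gaussian true values, unit-variance estimation noise) is purely illustrative and is not part of the project description.

```python
# Minimal simulation of the optimiser's curse: planning optimally against
# noisy value estimates (a stand-in for an imperfectly learned model) biases
# the planner toward actions whose values were overestimated.
# All numbers below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_trials = 100_000   # independent planning problems
n_actions = 5        # candidate actions per problem

# True (unknown) values of each action, and the planner's noisy estimates of them.
true_values = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_actions))
estimates = true_values + rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_actions))

# "Optimal" planning with respect to the learned (noisy) model:
# pick the action with the highest estimated value.
chosen = np.argmax(estimates, axis=1)
rows = np.arange(n_trials)

estimated_value_of_choice = estimates[rows, chosen].mean()
realised_value_of_choice = true_values[rows, chosen].mean()

print(f"expected value under the learned model: {estimated_value_of_choice:.3f}")
print(f"value actually realised:                {realised_value_of_choice:.3f}")
# The realised value is systematically lower than the estimated one.
```

Running the sketch shows a persistent gap between the two printed numbers; robust planning (question 1) and robust training (question 2) are two complementary ways of closing that gap.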

Required knowledge

A successful candidate should have strong programming skills (e.g., in Python) as well as a background in at least one of the following:

  • (deep) neural networks, and/or
  • automated planning.

Project funding

Project-based scholarship

Learn more about minimum entry requirements.