A Neuro-Symbolic Agent for Playing Minecraft


In this project, you will build an autonomous agent for playing Minecraft in the MineRL environment, or an agent for Animal-AI. You will learn how to incorporate symbolic prior knowledge to improve the performance of an agent trained with deep reinforcement learning (RL), the core technique behind AlphaGo. An RL-based agent learns a stochastic policy that decides which action to take at each step; correct choices of actions are rewarded by the game environment. Symbolic knowledge will be used to constrain the agent's action space at each time step, so that infeasible actions are excluded through (defeasible) reasoning. As a result, the agent benefits from both symbolic knowledge and the strengths of deep reinforcement learning.
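One common way to realise this constraint is action masking: symbolic reasoning produces a boolean mask of feasible actions, and the policy's logits for infeasible actions are set to negative infinity before sampling, so they receive zero probability. The sketch below is a minimal, hypothetical illustration assuming a discrete action space and NumPy; in the actual project the logits would come from a policy network and the mask from a symbolic reasoner.

```python
import numpy as np

def masked_policy(logits, feasible_mask):
    """Turn raw action scores into a distribution over feasible actions only.

    logits:        raw action scores, e.g. from a policy network (illustrative)
    feasible_mask: boolean array; True where symbolic reasoning permits the action
    """
    # Infeasible actions get a logit of -inf, so softmax assigns them probability 0
    masked = np.where(feasible_mask, logits, -np.inf)
    # Numerically stable softmax over the remaining feasible actions
    z = masked - masked.max()
    probs = np.exp(z)
    return probs / probs.sum()

# Example: 4 candidate actions; suppose symbolic rules forbid actions 1 and 3
logits = np.array([1.0, 2.0, 0.5, 3.0])
mask = np.array([True, False, True, False])
probs = masked_policy(logits, mask)
```

The agent then samples its next action from `probs` (e.g. with `np.random.choice`), guaranteeing that no excluded action is ever taken while leaving the relative preferences among feasible actions intact.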

 

Project duration: 12 weeks

Prerequisites: FIT2014, MAT1830, FIT3080, FIT5201

Student cohort

Single Semester

Aim/outline

The project aims to apply or devise a novel neuro-symbolic reinforcement learning approach for building agents in a gaming environment. 

URLs/references

Mitchener, Ludovico, David Tuckey, Matthew Crosby, and Alessandra Russo. "Detect, Understand, Act: A Neuro-symbolic Hierarchical Reinforcement Learning Framework." Machine Learning 111, no. 4 (2022): 1523-1549.

Silver, David, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert et al. "Mastering the Game of Go Without Human Knowledge." Nature 550, no. 7676 (2017): 354-359.