Don’t Miss the Exit: Identifying Critical States in Sequential Decision-Making for Biodiversity

Primary supervisor

Iadine Chades

Co-supervisors


Optimal policies derived from decision-theoretic models such as Markov Decision Processes (MDPs) often prescribe a single “best” action for every state. However, in real-world conservation contexts, managers rarely follow these prescriptions perfectly—due to uncertainty, limited trust, or operational constraints. This project explores how to make optimal policies more useful and interpretable by helping managers identify which states are critical to get right.

Aim/outline

The project will build on recent research on robust and interpretable planning to formalise and test different ways of defining “critical” in sequential decision problems. For instance, a state may be critical if:

  • A single deviation from the policy at that point causes a large drop in expected performance (sensitivity analysis; see the sketch after this list);

  • Small sequences of incorrect actions lead to irreversible outcomes (“exits”); or

  • The probability of reaching a “bad” terminal state (e.g., species extinction) exceeds a threshold under alternative actions.
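For concreteness, the first (sensitivity) definition can be computed directly from the optimal action values: a state is critical when even the best alternative to the optimal action loses substantial expected value. Below is a minimal Python sketch, assuming a small toy MDP whose transition and reward arrays are illustrative placeholders rather than a real conservation model:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a finite MDP by value iteration; return optimal V and Q."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum over s' of P[a][s, s'] * V[s']
        Q = R + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q
        V = V_new

def criticality(Q):
    """Gap between the best and second-best action in each state.

    A large gap means any single deviation from the optimal action
    costs a lot of expected value, so that state is "critical".
    (Replacing the second-best action with the worst action would
    give a worst-case variant of the same measure.)
    """
    Q_sorted = np.sort(Q, axis=1)
    return Q_sorted[:, -1] - Q_sorted[:, -2]

# Hypothetical 3-state, 2-action MDP: P[a] is the transition matrix
# for action a; R[s, a] is the immediate reward. State 2 is absorbing.
P = np.array([[[0.9, 0.1, 0.0],
               [0.0, 0.9, 0.1],
               [0.0, 0.0, 1.0]],
              [[0.1, 0.9, 0.0],
               [0.1, 0.0, 0.9],
               [0.0, 0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.5, -1.0],
              [0.0, 0.0]])

V, Q = value_iteration(P, R)
print("criticality per state:", criticality(Q))
```

States whose gap exceeds a chosen threshold would then be flagged for the manager; the other two definitions (exit sequences and chance constraints) would replace criticality() with a multi-step or probabilistic computation.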

Depending on the student’s background, the project may involve:

  • Algorithmic development in Python or MATLAB to compute critical states under different definitions (e.g., sensitivity, adversarial robustness, or chance constraints);

  • Visual analytics to highlight these “critical” regions on policy maps (a toy heat-map sketch follows this list); or

  • Case studies in ecological management (e.g., invasive species control, habitat restoration).
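As a taste of the visual-analytics direction, per-state criticality scores could be rendered as a heat map over a gridded state space, so that critical regions stand out on the policy map. A sketch with entirely placeholder data (the 10 × 10 grid and the axis labels are hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder criticality scores for a hypothetical 10 x 10 state grid,
# standing in for values computed as in the earlier sketch.
crit = np.random.default_rng(0).random((10, 10))

fig, ax = plt.subplots()
im = ax.imshow(crit, origin="lower", cmap="Reds")
ax.set_xlabel("habitat variable 1")   # hypothetical state dimension
ax.set_ylabel("habitat variable 2")   # hypothetical state dimension
ax.set_title("Where following the policy matters most")
fig.colorbar(im, ax=ax, label="value lost by deviating")
plt.show()
```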

By the end of the project, the student will produce a prototype tool that identifies and visualises critical states in a decision model—offering conservation managers insight into where following the policy matters most.

URLs/references

  • Chades, I., Chapron, G., Cros, M.-J., Garcia, F. and Sabbadin, R. (2014) MDPtoolbox: a multi-platform toolbox to solve stochastic dynamic programming problems. Ecography, 37, 916–920.
  • Puterman, M. L. (2014) Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons.
  • Sigaud, O. and Buffet, O. (2013) Markov decision processes in artificial intelligence. John Wiley & Sons.

Required knowledge

An understanding of Markov decision processes (MDPs), reinforcement learning, or mathematical programming would be ideal.