Efficient and Interpretable Modular End-to-end Autonomous Driving System

Primary supervisor

Loo Junn Yong

The future of autonomous driving systems holds great promise, offering a solution to the challenges associated with human error and the mental fatigue of driving. However, existing end-to-end modular autonomous driving models involve trade-offs between modularity (and hence interpretability) and efficiency. In this PhD project, the student is expected to conduct research in the area of end-to-end modular autonomous driving using computer vision and deep learning methods. This includes developing efficient and interpretable image processing, vision-based perception, and planning modules to accurately predict driving trajectories and controls for high-fidelity autonomous driving. Additionally, the student will apply sim-to-real adaptation strategies to generalize deep learning models for deployment in real-world driving scenarios.
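For illustration only (not part of the project specification), the sketch below shows what a modular yet end-to-end trainable driving pipeline might look like in PyTorch: a vision-based perception module exposes an interpretable intermediate output (here, a hypothetical semantic segmentation map) that a planning module consumes to predict future waypoints. All module names, dimensions, and design choices are assumptions, not the project's prescribed architecture.

```python
import torch
import torch.nn as nn


class PerceptionModule(nn.Module):
    """Encodes a camera image and exposes an interpretable segmentation map (assumed design)."""
    def __init__(self, num_classes: int = 4, feat_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Interpretable head: per-pixel semantic classes (hypothetical choice).
        self.seg_head = nn.Conv2d(feat_dim, num_classes, 1)

    def forward(self, image):
        feats = self.encoder(image)          # (B, feat_dim, H/4, W/4)
        seg = self.seg_head(feats)           # inspectable intermediate output
        return feats, seg


class PlanningModule(nn.Module):
    """Predicts future (x, y) waypoints from pooled perception features."""
    def __init__(self, feat_dim: int = 64, num_waypoints: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_waypoints * 2),
        )
        self.num_waypoints = num_waypoints

    def forward(self, feats):
        pooled = self.pool(feats).flatten(1)
        return self.mlp(pooled).view(-1, self.num_waypoints, 2)


class ModularDrivingModel(nn.Module):
    """End-to-end trainable, but built from separable, inspectable modules."""
    def __init__(self):
        super().__init__()
        self.perception = PerceptionModule()
        self.planning = PlanningModule()

    def forward(self, image):
        feats, seg = self.perception(image)
        waypoints = self.planning(feats)
        return waypoints, seg


if __name__ == "__main__":
    model = ModularDrivingModel()
    dummy_image = torch.randn(1, 3, 128, 256)   # single front-camera frame
    waypoints, seg = model(dummy_image)
    print(waypoints.shape, seg.shape)            # (1, 4, 2) and (1, 4, 32, 64)
```

The point of such a structure is that the segmentation output can be inspected or supervised independently of the planner, which is one common way the literature trades a small amount of efficiency for interpretability.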

Required knowledge

  • First-class bachelor’s honours or master’s degree in computer science, engineering, or a related field.
  • Alternatively, an upper second-class honours degree with a strong research track record and publication history in relevant areas.
  • Demonstrable experience in conducting an independent research project in deep learning, computer vision, image processing, or autonomous driving.
  • Proficient coding skills in Python, preferably with hands-on experience with deep learning frameworks such as PyTorch, TensorFlow, or Keras.

Project funding

Project-based scholarship

Learn more about minimum entry requirements.