Primary supervisor

Ehsan Shareghi

Co-supervisors

  • Paul Burgess (Faculty of Law)

State-of-the-art Large Language Models hallucinate in 69–88% of responses to legal queries; hallucinate in at least 75% of responses regarding a court's main ruling; and reinforce incorrect legal assumptions and beliefs (source: first reference under the references section below). While there is excitement about LLMs facilitating access to justice by offering the public a low-cost way to obtain legal advice, these limitations could exacerbate rather than mitigate the access-to-justice problem. This project seeks to improve the reliability of LLMs for their application in Law.

Student cohort

Double Semester

Aim/outline

  • An open-source codebase and a web-based demo
  • A publication in ACL/EMNLP/NAACL/EACL

Required knowledge

  • Must: fluency in Python and PyTorch
  • Must: academic or working knowledge of Large Language Models
  • Must: fluency with basic machine learning concepts (both theory and hands-on)
  • Preferred: have a background or keen interest in Law
  • Preferred: have fine-tuned a small language model (e.g., LLaMA)
  • Preferred: prior experience with language agents
  • Preferred: interested in doing a PhD