A Conversational Agent Interface for Tactile Graphics

Primary supervisor

Kim Marriott

The last two decades have witnessed a sharp rise in the amount of data available to business, government and science. Data visualisations play a crucial role in exploring and understanding this data. They provide an initial grasp of the data and allow the findings of data analytics techniques to be assessed. This reliance on visualisations creates a severe accessibility issue for blind people (by whom we mean people who cannot use graphics even when magnified).

Guidelines for accessible information provision recommend the use of raised line drawings, called tactile graphics, to show spatial data such as maps or charts. However, tactile graphics must be explored sequentially by touch, making it difficult to quickly obtain an overview of the graphic. To overcome this, guidelines recommend that tactile graphics be accompanied by a textual (braille) description. Such descriptions are fixed, though, and do not allow the blind reader to ask questions about the graphic or the underlying data.

Student cohort

Double Semester

Aim/outline

The aim of this project is to create a conversational agent that provides a blind person with an initial description of a tactile graphic and that can answer spoken questions about the tactile graphic and the underlying data.

This is intended to be a proof-of-concept and so will be hard-coded to work with a few representative graphics.

As part of the project you will need to work with blind collaborators to ascertain the kinds of questions they would like to ask and then to evaluate the system.

We plan to employ the conversational agent toolkit Dialogflow (cloud.google.com/dialogflow).
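As an illustration only, the following Python sketch shows one way a transcribed spoken question might be routed through Dialogflow's client library (google-cloud-dialogflow) and the detected intent mapped to a hard-coded answer for a single representative graphic. The project ID, intent names and answers are hypothetical placeholders, not part of the project specification.

    # A minimal sketch, assuming a Dialogflow agent has already been
    # configured with intents such as "describe_graphic" and "max_value".
    # All names and answers below are illustrative placeholders.
    from google.cloud import dialogflow

    # Hard-coded answers for one representative graphic (here, an imagined
    # bar chart of monthly rainfall), matching the proof-of-concept scope.
    ANSWERS = {
        "describe_graphic": "A bar chart of monthly rainfall, with months "
                            "on the x-axis and millimetres on the y-axis.",
        "max_value": "The wettest month is March, with 140 mm of rain.",
    }

    def answer_question(project_id: str, session_id: str, question: str) -> str:
        """Send a transcribed question to Dialogflow and map the detected
        intent to a hard-coded answer about the graphic."""
        client = dialogflow.SessionsClient()
        session = client.session_path(project_id, session_id)
        query_input = dialogflow.QueryInput(
            text=dialogflow.TextInput(text=question, language_code="en")
        )
        response = client.detect_intent(
            request={"session": session, "query_input": query_input}
        )
        intent = response.query_result.intent.display_name
        return ANSWERS.get(intent, "Sorry, I can't answer that yet.")

    if __name__ == "__main__":
        print(answer_question("my-gcp-project", "demo-session",
                              "Which month had the most rain?"))

Dialogflow can also return fulfillment text directly and accept streaming audio input; the client-side lookup table above simply mirrors the hard-coded proof-of-concept scope described earlier.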

Required knowledge

Basic programming skills.