To make artwork more accessible to people who are blind or have low vision, museums often offer audio guides or tours. While these options improve accessibility, they do not always provide a complete aesthetic experience. This project seeks to enhance an existing system that automatically generates soundscapes for visual art. The system uses machine learning to produce a musical experience that conveys the emotion and scenery depicted in the artwork. The project will consist of improving the system's current generative capabilities using state-of-the-art generative approaches. As part of the project, a qualitative study will be carried out with participants who are blind or have low vision.
The student should have programming skills and a background in machine learning. A musical background is desirable but not essential.