
[Malaysia] A Study on Multimodal Sentiment Analysis and Emotion Recognition

Primary supervisor

Wai Peng Wong

Co-supervisors


Multimodal data is growing exponentially, most visibly in social media's transition from text-based communication to video formats with the rise of TikTok, YouTube, and Instagram Reels. This shift demands a corresponding change in how we analyze data: traditional text-only sentiment analysis models such as TextCNN are no longer sufficient. Multimodal data presents an opportunity to improve on text-based analysis, since the information encoded in speech and visuals can provide additional context for a sentiment.
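As a toy illustration of why extra modalities can help, consider a late-fusion scheme in which each modality produces its own sentiment score and a weighted average combines them. The function, scores, and weights below are hypothetical, purely to sketch the idea, not part of any model named in this project:

```python
# Minimal late-fusion sketch: each modality emits a sentiment score
# in [-1, 1]; a weighted average combines them into one prediction.
# All names, scores, and weights here are illustrative assumptions.
def fuse(scores: dict, weights: dict) -> float:
    """Weighted average of per-modality sentiment scores."""
    total_w = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_w

# Sarcasm example: neutral-positive text, but negative tone and face.
modal_scores = {"text": 0.2, "audio": -0.6, "visual": -0.4}
modal_weights = {"text": 0.5, "audio": 0.25, "visual": 0.25}
fused = fuse(modal_scores, modal_weights)  # negative overall
```

Here the text modality alone would read as mildly positive, but the audio and visual cues pull the fused score negative, which is exactly the kind of context text-only models miss.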

This project has real-world applications across different industries. Social media is one domain to which multimodal sentiment analysis can be applied: VADER (Valence Aware Dictionary and sEntiment Reasoner) for sentiment analysis and JAMMIN for emotion analysis have been used successfully to detect hate speech on Facebook. Sentiment analysis of product reviews can greatly benefit companies trying to understand their customers' sentiment toward their products. And in an era of generative AI hype, customer service models built on generative AI could benefit from understanding sentiment in multimodal input to produce appropriate responses.
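To make the lexicon-based approach concrete, here is a toy scorer in the spirit of VADER. The mini-lexicon, negation set, and scoring rule below are illustrative assumptions, not VADER's actual lexicon or heuristics (the real library is `vaderSentiment`, whose `polarity_scores` also handles intensifiers, punctuation, and normalization):

```python
# Toy lexicon-based sentiment scorer, VADER-style.
# Lexicon values and negation handling are illustrative assumptions.
LEXICON = {"good": 1.9, "great": 3.1, "terrible": -2.1,
           "hate": -2.7, "love": 3.2}
NEGATIONS = {"not", "never", "no"}

def polarity(text: str) -> float:
    """Return an unnormalized valence sum for `text`."""
    tokens = text.lower().split()
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            valence = LEXICON[tok]
            # Flip polarity when the preceding token is a negation.
            if i > 0 and tokens[i - 1] in NEGATIONS:
                valence = -valence
            score += valence
    return score

polarity("a great product")  # positive
polarity("not good at all")  # negation flips the sign
```

Even this crude sketch shows the limits of text-only scoring: it has no way to use tone of voice or facial expression, which is precisely the gap this project's multimodal models target.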

Student cohort

Double Semester

Aim/outline

This project aims to achieve the following objectives:

  1. To run experiments using the baseline models with the benchmark datasets.
  2. To conduct comprehensive analyses of the models using different configurations and settings.
  3. To improve the accuracy of the models using deep learning techniques.
  4. To create a dynamic and interactive GUI.

Required knowledge

NLP, Python programming (experience running programs on HPC clusters is a plus).