Title: Analysing User Generated Multimodal Content and User Engagement in an Online Social Media Domain
Description: In the context of online social media, much of the research carried out to date uses the text from user posts and the social network structure. However, the trend across many social media platforms is a move from text to emojis, images, and videos, many of which are "memes" consisting of images superimposed with text. In this project we wish to analyse multimodal social media data in an entertainment domain. The aims of the project are 1) to analyse trends across different modalities of user-generated content with respect to features such as social media engagement, topics, higher-level concepts in the content, and user emotions, and 2) to determine how these features correlate with viewing figures. The analysis will be carried out using machine learning and deep learning techniques, in tandem with language models for text representation and interpretation, and with topic modelling techniques.