Sami Ul Haq
Title: Towards context-aware evaluation of Multimodal MT systems
Supervision Team: Sheila Castilho, DCU / Yvette Graham, TCD
Description: Context-aware machine translation (MT) systems have attracted growing interest in the community recently. Some work has been done on developing evaluation metrics that improve MT evaluation by taking into account discourse-level features, context span, and appropriate evaluation methodology. However, little research has addressed how context-aware metrics can be developed for multimodal MT systems.
Multimodal content refers to documents that combine text with images, video and/or audio. It ranges from nearly all the web content we view in our everyday online activities to much of the messaging we send and receive on systems such as WhatsApp and Messenger. This project will investigate whether inputs other than text, such as images, can be treated as context when evaluating translation quality and, if so, how automatic metrics that account for this multimodal nature can be developed. It will implement document- and context-level techniques currently being developed for automatic metrics, extending them to make use of the multimodal context required in a multimodal MT scenario.
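As a purely illustrative starting point (an assumption for discussion, not the project's method), a multimodal metric could interpolate a standard sentence-level text metric with an image-text alignment score from a pretrained vision-language model such as CLIP. In the sketch below, the model choice ("openai/clip-vit-base-patch32"), the weighting `alpha`, and the helper names `image_text_alignment` and `multimodal_score` are all hypothetical.

```python
# Hypothetical sketch: extending a text-only MT metric with image context.
# Assumes `pip install torch transformers sacrebleu pillow`; the weighting
# scheme and model choice are illustrative, not part of the project plan.
import torch
from PIL import Image
from sacrebleu.metrics import CHRF
from transformers import CLIPModel, CLIPProcessor

_clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
_chrf = CHRF()


def image_text_alignment(image_path: str, hypothesis: str) -> float:
    """Cosine similarity between the image and the candidate translation.

    CLIP's text encoder is English-only, so this only applies when the
    target language is English; a multilingual vision-language model
    would be needed for other target languages.
    """
    image = Image.open(image_path)
    inputs = _proc(text=[hypothesis], images=image,
                   return_tensors="pt", padding=True)
    with torch.no_grad():
        out = _clip(**inputs)
    # text_embeds and image_embeds are already L2-normalised, so their dot
    # product is the cosine similarity in [-1, 1]; rescale it to [0, 1].
    cos = (out.text_embeds @ out.image_embeds.T).item()
    return (cos + 1.0) / 2.0


def multimodal_score(hypothesis: str, reference: str, image_path: str,
                     alpha: float = 0.7) -> float:
    """Interpolate a text metric (chrF here) with the image-alignment score."""
    text_score = _chrf.sentence_score(hypothesis, [reference]).score / 100.0
    visual_score = image_text_alignment(image_path, hypothesis)
    return alpha * text_score + (1.0 - alpha) * visual_score


if __name__ == "__main__":
    print(multimodal_score(
        hypothesis="A dog is running on the beach.",
        reference="A dog runs along the beach.",
        image_path="example.jpg",  # placeholder path
    ))
```

The interpolation is only one of several possible designs; the project would also need to study how such scores correlate with human judgements of translations produced with and without access to the image context.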