d-real is funded by Science Foundation Ireland and by the contributions of industry partners.
Student Name: Assadig Almhdy Abdelrhman Abakr
Title: Intelligent Edge Computing for Low-Latency XR Holographic Communications
Supervision Team: Steven Davy, TU Dublin / John Dingliana, TCD / Owais Bin Zuber, Huawei Ireland Research Centre
Description: This proposed PhD project aims to address the high bandwidth and ultra-low latency requirements of extended reality (XR) holographic communications, which currently limit the potential of this technology. By leveraging artificial intelligence (AI) and edge computing techniques, the project aims to reduce the amount of data that needs to be transmitted over the network and to optimize data transmission, resulting in a more seamless and immersive experience for users. The project will investigate the use of machine learning algorithms to intelligently filter and compress 4D light field data, the use of edge computing to process the data on either end of the communication, and the use of multi-path networks to optimize data transmission. The project will work closely with Huawei to develop business cases and test beds for the technology. The project has the potential to unlock new use cases for XR communication, enabling remote collaboration, education, and telemedicine.
Student Name: Bharat Agarwal (graduated Oct. 2023)
Title: Energy-Oriented and Quality-Aware Network Path Selection to Support Differentiated Adaptive Multimedia Delivery in Heterogeneous Mobile Network Environments
Supervision Team: Gabriel-Miro Muntean, DCU / Marco Ruffini, TCD
Description: The various wireless networks deployed by businesses, public institutions and others, each based on different technologies, together form a heterogeneous wireless network environment that can provide ubiquitous connectivity. Since mobile device owners use their devices anywhere and anytime, often while on the move, energy efficiency is of paramount importance when delivering data to mobile devices in general, and rich media content in particular. Moreover, users prefer high-quality content, which requires more energy to transmit and process. Consequently, there is a need for a trade-off between quality and energy levels. This project will develop an energy-oriented mechanism to select suitable delivery paths for multimedia streaming services in heterogeneous wireless network environments in order to both save energy and maintain high levels of service quality.
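As a purely illustrative sketch (the actual selection mechanism is the subject of the research), a quality/energy trade-off of this kind can be expressed as a weighted utility over candidate paths; the path names, attribute scales and weights below are hypothetical:

```python
# Hypothetical sketch: pick a delivery path trading off quality vs. energy.
# Path attributes and weights are illustrative, not project results.

def select_path(paths, w_quality=0.6, w_energy=0.4):
    """Return the path with the best quality/energy utility.

    paths: list of dicts with 'name', 'quality' (0..1, higher is better)
    and 'energy' (0..1 normalised cost, lower is better).
    """
    def utility(p):
        return w_quality * p["quality"] - w_energy * p["energy"]
    return max(paths, key=utility)

candidates = [
    {"name": "wifi",     "quality": 0.9, "energy": 0.7},
    {"name": "cellular", "quality": 0.8, "energy": 0.3},
]
best = select_path(candidates)  # cellular wins: slightly lower quality, far lower energy
```

In practice the weights would themselves adapt to battery level, user preferences and service class, which is where the research questions lie.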
Student Name: Jesús Aguilar López
Title: Verbal Language to Irish Sign Language Machine Translation: A Linguistically Informed Approach
Supervision Team: Irene Murtagh, TU Dublin / Andy Way, DCU / John Kelleher, TCD
Description: Machine translation (MT) of verbal language (speech/text) has garnered widespread attention over the past 60 years. Computational processing of signed language, on the other hand, has unfortunately not received nearly as much attention, resulting in its exclusion from modern language technologies. This exclusion leaves deaf and hard-of-hearing individuals at a disadvantage, aggravating the human-to-human communication barrier and further suppressing an already under-resourced set of languages used by the estimated 72 million deaf people in the world. The aim of this project is to develop a linguistically motivated sign language machine translation (SLMT) avatar that will translate between English (text/speech) and Irish Sign Language (ISL). The project will focus, in particular, on current linguistic and technical challenges in relation to the computational modelling and processing of sign language. This will involve research in sign language linguistics, computational linguistics, natural language processing and virtual character animation.
Student Name: Asena Akkaya
Title: Investigating objective neural indices of music understanding
Supervision Team: Giovanni Di Liberto, TCD / Shirley Coyle, DCU
Description: Music is ubiquitous in our daily life. Yet it remains unclear how our brains make sense of complex music sounds, leading to music enjoyment and contributing to the regulation of mood, anxiety, pain, and perceived exertion during exercise. A recent methodological breakthrough demonstrated that brain electrical signals recorded with electroencephalography (EEG) during music listening reflect the listener’s attempt to predict upcoming sounds. This project aims to identify objective metrics of music processing based on EEG, pupillometry and other sensing modalities in progressively more ecologically-valid settings. The project will culminate in the realisation of a brain-computer interface that informs us in real time about the listener’s level of music “understanding”. In doing so, the project will offer a new methodology with various potential applications in brain health research (e.g., hearing impairment, dementia, anxiety disorders).
Student Name: André Almo
Title: Inclusive Maths: Designing Intelligent and Adaptable Educational Games to Reduce Maths Anxiety in Primary Schools
Supervision Team: Pierpaolo Dondio, TU Dublin / Attracta Brennan, UoG
Description: Maths Anxiety is a condition affecting one out of six students worldwide. Although digital games have been widely used to support children’s mathematical skills, results regarding their effect on Maths Anxiety are inconclusive. Potential explanations are the scarcity of Maths-related games able to adapt to the learner and the lack of games explicitly designed to deal with Maths Anxiety. Inclusive Maths seeks to investigate whether the introduction of adaptation and anxiety-aware features in digital games for Primary School can improve students’ performance and reduce their Maths Anxiety. Our hypothesis is that by adding adaptation to Maths games, anxious students will feel more confident playing the game, and that by introducing anxiety-aware features, such as an emphasis on the storytelling elements of the game, individual reward systems, and interactive and collaborative game modes, players will feel more engaged. The project will evaluate the games developed during three cycles of experimentation in 30 participating schools.
Student Name: Kunchala Anil
Title: Privacy-preserving Pedestrian Movement Analysis in Complex Public Spaces
Supervision Team: Bianca Schoen-Phelan, TU Dublin / Mélanie Bouroche, TCD
Description: Smart cities should encourage walking, as it is one of the most sustainable and healthiest modes of transport. However, designing public spaces to be inviting to pedestrians is an unresolved challenge due to the wide range of possible routes and the complex dynamics of crowds. New technologies such as the Internet of Things (IoT), video analysis, and infrared sensing provide an unprecedented opportunity to analyse pedestrian movements in much greater detail. Any data captured for analysis must also protect the privacy of pedestrians to avoid identification via direct imaging or movement patterns. This project pushes the state of the art in pedestrian movement analysis by proposing the use of 3D multi-modal data from outdoor locations for quantitative wireframe representations of individuals as well as groups. The work involves crowd movement simulation, IoT data capture, privacy-preserving data analytics, the smart city paradigm, and health and wellness.
Student Name: Folashade Fatima Badmos
Title: Co-design of an interactive wellness park: A multimodal physical web installation
Supervision Team: Damon Berry, TU Dublin / Mads Haahr, TCD/ Emma Murphy TU Dublin
Description: Physical rehabilitation is a critical and widely applicable healthcare intervention. Increased engagement in rehabilitation improves health outcomes and reduces healthcare costs. Prescribed outdoor physical exercise can promote social interaction and improve quality of life. For this work, a co-designed physical web installation will be created to make managed rehabilitation exercises more engaging and sustainable. QR codes, NFC, and BLE will enable low-barrier connections to web resources to support managed exercise. The user interface will be co-created by service users and clinicians, informed by behaviour change theory, to create a personalised and accessible outdoor digital rehabilitation intervention. The proposed system will comprise a physical web installation residing on wooden posts on a healthcare campus and web infrastructure enabling exercise regimes to be personalised. Through co-design with stakeholders, physical web UX will be investigated to assess different approaches in order to produce a working installation and to develop an accessible and reproducible design.
Student Name: Jessica Bagnall
Title: Deep Learning for Magnetic Resonance Quantitative Susceptibility Mapping of carotid plaques
Supervision Team: Caitríona Lally, TCD / Catherine Mooney, UCD / Brooke Tornifoglio, TCD and Karin Shmueli, UCL
Description: Carotid artery disease is the leading cause of ischaemic stroke. The current standard-of-care involves removing plaques that narrow a carotid artery by more than 50%. The degree of vessel occlusion, however, is a poor indication of plaque rupture risk, which is ultimately what leads to stroke. Plaque mechanical integrity is the critical factor which determines the risk of plaque rupture, where the mechanical strength of this tissue is governed by its composition. Using machine learning approaches and both in vitro and in vivo imaging, and in particular Quantitative Susceptibility Mapping metrics obtained from MRI, we propose to non-invasively determine plaque composition and hence vulnerability of carotid plaques to rupture. This highly collaborative project has the potential to change diagnosis and treatment of vulnerable carotid plaques using non-ionizing MR imaging which would be truly transformative for carotid artery disease management.
Student Name: Seth Grace Banaga
Title: Embody Me: Achieving Proxy Agency through Embodiment in Mixed Reality
Supervision Team: Carol O’Sullivan, TCD / Gearóid Ó Laighin, UoG
Description: An important factor in achieving believable and natural interactions in Virtual and Mixed Reality systems is the sense of personal agency, i.e., when a user feels that they are both controlling their own body and affecting the external environment e.g., picking up a ball and throwing it. The most natural way to give a sense of agency and control to a user is to enable them to use their own natural body motions to effect change in the environment. However, in restricted spaces or if the user has a disability, this may not always be possible. In this PhD project, we will investigate the effects of different strategies for non-direct agency in the environment, from simple device inputs, through to controlling an embodied virtual agent. This will involve animating the motions of a virtual avatar, and controlling the motions of this avatar using a variety of different methods (e.g., game controller, gesture, voice).
Student Name: Dipto Barman
Title: Personalised Support for Reconciling Health and Wellbeing Information of Varying Complexity and Veracity Towards Positive Behavioural Change
Supervision Team: Owen Conlan, TCD / Jane Suiter, DCU
Description: This project will introduce a new approach to visual scrutability that can facilitate users in examining complex and sometimes conflicting information, specifically in the context of personal healthcare, towards changing behaviour to improve their health. The research will examine how scrutability, an approach to facilitating the inspection and alteration of the user models that underpin a system’s interaction with a user, may provide a means for empowering users to interact with such difficult-to-reconcile information in order to promote behaviour change. Visual metaphors can empower the user to scrutinise and reflect on the development of their understanding, their knowledge acquisition and the validity of different sources of information. Defining a new approach that will enable users to reflect upon and control what information they consume in a personalised manner is proposed as a key element in fostering enhanced understanding of their own healthcare and wellbeing. This research will build on results from the H2020 Provenance and ProACT projects.
Student Name: Shubhajit Basak (graduated, 2024)
Title: Advanced Facial Models for Rendering of Virtual Humans
Supervision Team: Michael Schukat, UoG / Rachel McDonnell, TCD
Description: This PhD project will build on the current state of the art, applying advanced facial generation techniques from deep learning, such as StyleGAN, to build photo-realistic ‘random’ 3D facial models that can subsequently be rendered to multiple 2D poses. A particular focus of this research will be the photorealistic rendering of facial textures at NIR and LWIR frequencies, so that these models can generate multi-wavelength representations. In addition, this work will seek to go beyond existing facial datasets by rendering additional ground-truth information from the 3D animations.
Student Name: Maryam Basereh
Title: Automatic Transparency Evaluation for Open Knowledge Extraction Systems
Supervision Team: Rob Brennan, UCD / Gareth Jones, DCU
Description: Open Knowledge Extraction (OKE) is the automatic extraction of structured knowledge from unstructured/semi-structured text, and the representation and publication of that knowledge as Linked Data (Nuzzolese et al. 2015). Due to their scalability in searching and extracting knowledge, the use of OKE systems as a fundamental component of advanced knowledge services is growing. However, like many other modern AI-based systems, most OKE systems use non-transparent algorithms. This means that their processes and outputs are not understandable and explainable, the system’s accountability cannot be guaranteed, and in case of any adverse outcome, explanations cannot be provided. Transparency is one of the main components of AI governance and is necessary for accountability (Diakopoulos 2016, Reddy et al. 2020, Lepri et al. 2018, Winfield et al. 2019). GDPR also requires transparency by affirming “the right to explanation” and restricting automated decision-making (Goodman and Flaxman 2017). In order to enhance the transparency of OKE systems, it is important to be able to evaluate their transparency automatically. Due to the lack of research in this area and the importance of transparency, the focus of this research is on automatic transparency evaluation for OKE systems.
Student Name: Michael Gringo Angelo Bayona
Title: Mobile apps for advanced language learning (speaking/listening)
Supervision Team: Elaine Uí Dhonnchadha, TCD / Andrew Hines, UCD
Description: Speaking and listening in a new language are critical when moving country, especially for those moving for a job or to study in a university where programmes are not delivered in their native language. Opportunities to practice language skills with relevant content, accents and dialects for proficiency in an academic subject with specialised/technical vocabulary are usually limited prior to arriving in the destination country. What if there was a convenient, personalised mobile application that was tailored to an individual to provide targeted learning support based on their language and vocabulary preferences? This project will develop methods to generate user-specific listening materials and computer-based systems that can measure and provide feedback on mispronunciation. This project will give the student an opportunity to learn about linguistics, speech signal processing, Natural Language Processing (NLP), and state-of-the-art machine learning. Recordings, as well as NLP and text-to-speech synthesis with state-of-the-art voice conversion (VC), will be used to generate content that is relevant to the language learners’ area of study in order to generate listening/reading exercises. Deep Neural Networks using transfer learning will be applied to mispronunciation detection. Computer-aided pronunciation teaching (CAPT) will extend AI techniques in the domains of automatic speech recognition (ASR) and speech quality assessment to evaluate the learner’s speech and provide personalised feedback.
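As a simple illustrative baseline for mispronunciation scoring (a sketch only; the project itself targets deep-learning approaches), the phoneme sequence expected for an utterance can be aligned against the sequence recognised by an ASR system using edit distance, with the normalised distance serving as a crude pronunciation error rate. The phoneme strings below are hypothetical examples:

```python
# Illustrative baseline: align expected vs. recognised phoneme sequences
# with Levenshtein distance and report a per-utterance error rate.

def phoneme_error_rate(expected, recognised):
    """Edit distance between phoneme lists, normalised by reference length."""
    n, m = len(expected), len(recognised)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if expected[i - 1] == recognised[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[n][m] / max(n, 1)

# Learner says "three" with /t/ instead of /th/: one substitution in three phonemes.
rate = phoneme_error_rate(["th", "r", "iy"], ["t", "r", "iy"])
```

A DNN-based CAPT system would replace both the recogniser and this symbolic comparison, but the alignment idea is the same.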
Student Name: Dan Bigioi
Title: Multi-Lingual Lip-Synch – Text to Speech & Audio Representation
Supervision Team: Peter Corcoran, UoG / Rachel McDonnell, TCD / Naomi Harte, TCD
Description: This project will apply some of the latest deep learning techniques to build specialised datasets and train advanced AI neural network models to deliver real-time multi-lingual lip-synch for speakers in a video. This project will focus on conversion of text subtitles into an intermediate speech representation suitable across multiple languages (e.g. phonemes). The preparation and automated annotation of specialised datasets provides an opportunity for high-impact research contributions from this project. The researcher on this project will collaborate with a second PhD student who will focus on photo-realistic lip-synching of the speech data. Both PhDs will have a unique opportunity to collaborate with engineers from Xperi, the supporting industry partner. The end goal is a practical production pipeline, inspired by Obamanet, for multi-lingual over-dubbing of video content from multi-lingual subtitles.
Student Name: Bojana Bjegojević
Title: Fatigue monitoring and prediction in Rail applications (FRAIL)
Supervision Team: Maria Chiara Leva, TU Dublin / Sam Cromie, TCD / Dr Nora Balfe, TCD / Dr Luca Longo, TU Dublin
Description: The effect of fatigue on human performance has been observed to be an important factor in many industrial accidents. However, defining and measuring fatigue is not easily accomplished. The objective of this project is to test the most relevant mobile wearable and non-wearable unobtrusive technologies for monitoring fatigue in three different working environments in Irish Rail (e.g. train driving, signaling and maintenance tasks), and to develop a protocol that combines biosensor and/or mobile performance data acquisition (e.g. a mobile version of the Psychomotor Vigilance Task, unobtrusive eye-tracking devices, wearable HRV measurement devices, physical activity monitoring, mobile EEG, etc.) with self-reported journal/self-assessment data, to help operators and organizations monitor the levels of fatigue experienced. The project will ultimately deliver a proposal for a user-friendly fatigue-monitoring tool to assist with continuous improvement in the area of Fatigue Risk Management.
Student Name: Nour Boulahcen
Title: BioVerse: a Multidirectional Framework for Human-Plant-Computer Interfaces
Supervision Team: Gareth W. Young, TCD / David Coyle, UCD / Carol O’Sullivan, TCD
Description: Contemporary technological advancements are significantly impacting the environment. The production and usage of electronic devices are detrimental, depleting resources and harming natural ecosystems. Simultaneously, urban and residential areas often lack greenery, leading to a disconnection from nature, contributing to widespread feelings of despondency, and disrupting the ecological balance. Furthermore, many technological innovations primarily cater to WEIRD (Western, Educated, Industrialized, Rich, and Democratic) countries, neglecting diverse demographics such as the youth, elderly, and those with alternative cognitive perspectives.
However, there is promise in applying Human-Plant-Computer systems to address these challenges. By seamlessly integrating natural and virtual plants, these systems enable emotional responses and a reconnection with the natural world within a technological context, particularly in education and urban development.
This research aims to understand the intricate interactions between plants, computers, and humans, harnessing botanical knowledge to drive progress in education, environmental sustainability, and urban development while considering ecological and social implications.
Student Name: Rob Bowman (submitted PhD, awaiting viva, 2024)
Title: Designing Conversational User Interfaces for More Effective Mood Logging
Supervision Team: Gavin Doherty, TCD/ Benjamin R. Cowan, UCD / Anja Thieme, Microsoft Research Cambridge
Description: Self-monitoring activities, such as mood logging, are a central part of many treatments for mental health problems. They serve to help raise awareness of the person’s own feelings, daily activities and cognitive processes, and provide information that can inform future treatment. One interesting possibility for supporting such self-disclosure is through conversational user interfaces, allowing users to disclose sensitive information without judgment, facilitating more honest reflection and emotional reaction. Currently there is little understanding about 1) the opportunities and challenges that potential users see in using conversational interfaces for mood logging; 2) appropriate design parameters (e.g. appropriate dialogue structure, linguistic strategies) and their effect on user engagement; and, critically, 3) how effective this interaction would be in producing honest and frequent reporting in a real world deployment. This PhD will aim to target these three challenge areas.
This PhD is also supported by Microsoft Research through its PhD Scholarship Programme.
Student Name: Vicent Briva-Iglesias (graduated)
Title: Understanding the Experience of Interactive Machine Translation
Supervision Team: Sharon O’Brien, DCU / Benjamin R. Cowan, UCD
Description: An increasing amount of information is being translated using Machine Translation. Despite major improvements in quality, intervention is still required by a human if quality is to be trusted. This intervention takes the form of “post-editing”, i.e. identifying and quickly fixing MT errors. This tends to be done in a serial manner, that is, the source text is sent to the MT engine, an MT suggestion appears, and the editor assesses it and fixes it, if necessary (termed “traditional” post-editing). Recently, a new modality has been developed called Interactive Machine Translation (IMT), involving real-time adaptation of the MT suggestion as the editor makes choices and implements edits. Very little is understood about this new form of human-machine interaction. This PhD will fill this knowledge gap by researching the cognitive effort involved in this type of translation production; how perceptions of system competence and trust influence decision-making; and how these evolve over time and experience.
Student Name: Sarah E. Carter (graduated, 2024)
Title: Privacy and ethical value elicitation in smartphone-based data collection
Supervision Team: Heike Schmidt-Felzmann, UoG / Mathieu D’Aquin, Université de Lorraine, LORIA / Kathryn Cormican, UoG / Dave Lewis, TCD
Description: Data collection is increasingly being carried out through personal mobile devices. A key challenge is dealing with privacy and ethics in a way that is meaningful to the users. GDPR compliance is an obvious aspect, with much of it already handled by existing frameworks (e.g. consent gathering, secure data handling). However, there is increasing concern that the technical handling of GDPR is not sufficient to address questions of data ethics. In this project, we will investigate ways to introduce a continuous alignment between participants’ privacy preferences and ethical values in smartphone-based data collection.
Student Name: Jackey J. K. Chai
Title: Simulation and Perception of Physically-based Collisions in Mixed Reality Applications
Supervision Team: Carol O’Sullivan, TCD / Brendan Rooney, UCD
Description: In this project, we will research new methods for simulating collisions and contacts between objects that move according to the laws of physics, when one of those objects is real and the other one virtual (e.g., virtual ball bouncing on a real table, or vice versa). We will also explore the factors that affect the perception of physically based interactions between real and virtual objects (both rigid and deformable). Multisensory (i.e., vision, sound, touch) simulation models of physically plausible interactions will be developed, using captured data and machine learning (ML) and driven by our new perceptual metrics. Our real-time simulations will be applied in Mixed Reality (MR) environments, which will be displayed using a variety of technologies, including the MS Hololens, projection-based MR and hand-held devices.
Student Name: Yifan Chen
Title: Exploring Immersive Storytelling for Digitally-Enhanced Co-creative Practices
Supervision Team: Gareth W. Young, TCD / Sam Redfern, UoG
Description: The project “Exploring Immersive Storytelling for Enhanced Co-creative Practices” investigates integrating immersive extended reality (XR) technologies (VR, AR, spatial computing, etc.) with co-creative processes. It aims to develop a theoretical framework and innovative methodologies to enhance collaboration through immersive storytelling. By leveraging digital platforms, the project seeks to create real-time, interactive experiences that engage participants more profoundly and foster creativity. Practical applications in medicine, education, entertainment, and collaborative work environments will be explored, demonstrating the potential for immersive storytelling to transform co-creative practices. This research aligns with the d-real initiative’s goals by advancing digital media research, developing cutting-edge digital platforms, and applying findings to diverse domains, thus contributing to interdisciplinary collaboration and technological advancement.
Student Name: Orla Cooney
Title: Designing speech agents to support mental health
Supervision Team: Benjamin R. Cowan, UCD / Gavin Doherty, TCD
Description: Voice User Interfaces (VUIs) could be highly effective for delivering mental health interventions, allowing users to disclose sensitive information and engage without judgment. This PhD will fuse knowledge from HCI, psycholinguistics, sociolinguistics, speech technology and mental health research to identify how to most effectively design VUIs for mental health applications. In particular, the PhD will focus on how 1) speech synthesis; 2) the linguistic content of utterances; and 3) the type of dialogue used by such agents impact engagement in mental health interventions. This high-impact research topic will break new ground by demonstrating the impact of speech interface design choices on user experience and user engagement in this context. The successful candidate will be based at University College Dublin (UCD) and will be part of the HCI@UCD group.
Student Name: Rose Connolly
Title: Don’t Stand So Close to Me: Proxemics and Gaze Behaviors in the Metaverse
Supervision Team: Rachel McDonnell, TCD / Cathy Ennis, TU Dublin / Victor Zordan, Principal Scientist Roblox
Description: Given the prolific rise of the Metaverse, understanding how people connect socially via avatars in immersive virtual reality has become increasingly important. Current social platforms poorly model non-verbal behaviors such as proxemics (how close people stand to one another) and mutual gaze (whether or not they are looking at one another). However, these cues are extremely important in social interactions and communication. In this project, we will record and investigate real eye gaze and proxemics in groups and build computational models to improve avatar motions in interactive immersive virtual spaces. This position is partially supported by funds from Roblox Corporation.
Student Name: Rory Coyne
Title: Merging psychology and technology: Non-contact monitoring of stress, cognitive load and fatigue in automated driving systems
Supervision Team: Jane Walsh, UoG / Alan Smeaton, DCU / Peter Corcoran, UoG
Description: Automated driving systems (ADS) represent a promising instance of leveraging artificial intelligence to enhance user experience and vehicle safety. To facilitate seamless interaction between the user and system, driver monitoring systems (DMS) are being developed which aim to monitor the physiological state of the driver and intervene when the user’s psychological state reaches critical levels. However, the existing technology relies on invasive sensors, and thus can be used only for validation purposes. The purpose of the present research is to develop and establish a protocol for the measurement of driver psychophysiology and classification of target states – stress, cognitive load and fatigue – using non-invasive, sensor-based methods. A secondary aim of the present research is to examine drivers’ psychophysiological responses to stress, cognitive load and fatigue while using automated driving systems. Physiological parameters (heart rate, respiration rate, electrodermal activity, eye-tracking) will be measured using state-of-the-art near-infrared imaging techniques. The extracted data will be classified using machine learning algorithms to obtain a measure of driver states. The results will inform the development of DMS in on-road settings as ADS continue to emerge at the consumer level. This research will be conducted in collaboration with Xperi, representing an exciting integration of behavioural science with technology.
Student Name: Eduardo Cueto Mendoza
Title: Model Compression for Deep Neural Networks
Supervision Team: John Kelleher, TUD / Rozenn Dahyot, Maynooth University
Description: Deep learning has revolutionized digital media analysis, be it video, image, text, or indeed multimedia. The deep learning revolution is driven by three long-term trends: Big Data, more powerful computers (GPUs/TPUs), and ever larger models. At the same time, there has been an increase in edge computing and the deployment of deep neural networks to devices that have limited resources (such as memory, energy or bandwidth). This project will explore the development of novel cost functions tailored to deep learning models for video and image analysis; compression techniques for deep neural networks for video and image analysis; and error analysis for model compression techniques.
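To illustrate one family of compression techniques the project might study (a minimal sketch, not the project's method), magnitude pruning zeroes out the smallest weights of a trained layer so the resulting sparse matrix can be stored and computed more cheaply; the toy weight matrix below is invented for the example:

```python
# Illustrative magnitude pruning: zero out the smallest-|w| entries of a layer.
# The weight matrix is a toy example, not a trained model.

def prune_by_magnitude(weights, sparsity):
    """Set roughly the `sparsity` fraction of smallest-magnitude weights to zero."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)              # number of weights to drop
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

layer = [[0.9, -0.01, 0.4],
         [-0.05, 0.7, 0.02]]
pruned = prune_by_magnitude(layer, sparsity=0.5)  # the three smallest weights become 0.0
```

Real compression pipelines combine pruning with quantisation and retraining to recover accuracy, and the error analysis of exactly that recovery is one of the project's stated themes.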
Student Name: Sam Davern
Title: Procedural Generation of Narrative Puzzles
Supervision Team: Mads Haahr, TCD / Marguerite Barry, UCD
Description: Narrative puzzles are puzzles that form part of the progression of a narrative, whose solutions involve exploration and logical as well as creative thinking. They are key components of adventure and story-driven games, and often feature in large open-world games. However, filling large open worlds with engaging content is challenging, especially for games with procedurally generated worlds, such as Minecraft (2011) and No Man’s Sky (2016). Systems exist for generating narrative puzzles procedurally, but they lack context about many narrative elements, such as character motivation, plot progression and dramatic arc, as well as player modelling. This project will improve the procedural generation of narratives for small-scale narrative games as well as large-scale open-world games by integrating new types of narrative elements and player modelling into the Story Puzzle Heuristics for Interactive Narrative eXperiences (SPHINX) framework, potentially resulting in dynamically generated narratives of increased sophistication and significantly improved player experience.
Student Name: Prasanjit Dey
Title: Monitoring and Short-term Forecasting of Atmospheric Air Pollutants Using Deep Neural Networks
Supervision Team: Bianca Schoen-Phelan, TU Dublin / Soumyabrata Dev, UCD
Description: Air pollution is a persistent problem in most of the world’s cities. It has a significant negative influence on citizen health and quality of life. Therefore, it is important to continuously monitor pollution concentrations and provide short-term forecasts. Historically, this forecasting has been done quite poorly: most models are statistical and are limited in forecast range and effectiveness. The goal of this PhD project is to create an intelligent system that uses a combination of computer vision and deep learning technologies to identify, monitor, and forecast air pollution in real time, as well as provide residents with an early warning system. This PhD project will also assess the key meteorological variables that affect atmospheric air pollutant concentrations and examine the forecasting model’s effectiveness for the island of Ireland, particularly the Dublin metropolitan area.
Student Name: Sukriti Dhang
Title: Real-time Vision-based Product Placements in Multimedia Videos
Supervision Team: Soumyabrata Dev, UCD / Mimi Zhang, TCD
Description: Product placement and embedded marketing are used extensively for advertising in today’s skip-ad generation. In this PhD project, we use computer vision and deep learning techniques to accurately perform product placement in multimedia videos. We intend to use convolutional neural networks to accurately detect existing adverts in videos, track them across image frames, and replace them with new advertisements for targeted audiences. The designed neural networks will be evaluated on available manually annotated data and synthetic datasets. The developed techniques will have wide-ranging impacts on a variety of applications, including sports billboard marketing and retail fashion advertising, amongst others.
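To illustrate the detect-track-replace step (a sketch only: the bounding box and images are hypothetical, and a real system would use a CNN detector and homography-based warping rather than nearest-neighbour scaling):

```python
# Illustrative only: replacing a detected advert region in a frame.
# The box would come from a CNN detector tracking the advert per frame.

def replace_region(frame, box, advert):
    """Overwrite the detected billboard box with a new advert,
    scaled by nearest-neighbour sampling (stand-in for real warping)."""
    x0, y0, x1, y1 = box
    h, w = y1 - y0, x1 - x0
    ah, aw = len(advert), len(advert[0])
    for dy in range(h):
        for dx in range(w):
            # nearest-neighbour lookup into the advert image
            frame[y0 + dy][x0 + dx] = advert[dy * ah // h][dx * aw // w]
    return frame

frame = [[0] * 6 for _ in range(4)]   # hypothetical 4x6 greyscale frame
advert = [[1, 2], [3, 4]]             # hypothetical 2x2 replacement advert
out = replace_region(frame, (2, 1, 5, 3), advert)  # box from a detector
```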
Student Name: Kavach Dheer
Title: Exploring the Integration of Emotional, Cognitive and BioPhysical Sensing into Recommender Systems for Digital Entertainment
Supervision Team: Josephine Griffith, UoG / Robert Ross, TU Dublin / Peter Corcoran, UoG and Joe Lemley, Xperi
Description: Passive sensing of a user’s emotional state is challenging without measuring biophysical signals, although there has been progress in determining emotional states from video-based facial analysis, human-speech analysis and combined approaches. Cognition and stress assessment are niche areas of research but have recently become important in driver monitoring systems. This research will explore new approaches that combine state-of-the-art hybrid speech/imaging techniques to perform a real-time emotional/cognitive state assessment (ECSA) of a user interacting with a recommender system. In parallel, the recommender model will be adapted to respond to the emotional/cognitive inputs, employing these to dynamically adapt the outputs provided to the user, based on their assessed emotional/cognitive states. As an indicative example, an in-car entertainment system might decide between suggesting video entertainment for occupants, or music for the driver, based on ECSAs of both driver and occupants, using data from an in-car camera and microphone.
Student Name: Johanna Didion
Title: Blended Intelligence and Human Agency
Supervision Team: David Coyle, UCD / Gavin Doherty, TCD
Description: In cognitive science, the sense of agency is defined as the experience of controlling one’s actions and, through this control, affecting the external world. It is a crosscutting experience, linking to concepts such as free will and causality and having a significant impact on how we perceive the world. This project investigates people’s sense of agency when interacting with intelligent systems (e.g. voice agents). Whereas past research has explored situations where actions can be clearly identified as voluntary or involuntary, intelligent environments blur this distinction: intelligent systems often interpret our intentions and act on our behalf. How this blending of intention and action impacts the sense of agency is poorly understood. This project involves developing speech and VR systems that require human-computer cooperation, and conducting studies to assess the impact of blended intelligence on people’s experience of agency. The research has direct implications for technologies ranging from driverless cars to intelligent personal assistants in phones.
Student Name: Haoyang Du
Title: Talk to me: Creating plausible speech-driven conversational characters and gestures
Supervision Team: Cathy Ennis, TU Dublin / Rachel McDonnell, TCD / Benjamin Cowan, UCD and Julie Berndsen, UCD
Description: Interaction with virtual characters has provided players with increased engagement and opportunities for immersion in a wide range of social contexts. With the advance of spaces like the Metaverse and applications such as ChatGPT, the demand for engaging virtual characters who can generate plausible gestures and behaviours for speech will only increase. In any space that allows for embodied interaction, where players/users can be represented by a virtual avatar or where they interact with a virtual character, exchanges can become more engaging. However, the requirements of real-time dynamic interactions pose a serious challenge for developers; plausible and engaging behaviour and animation are required in scenarios where it is impossible to script exactly what types of actions might be needed. We aim to tackle part of this problem by investigating speech-driven non-verbal social behaviours for virtual avatars (such as conversational body motion and gestures) and developing ways to generate plausible interactions with them in real-time interactive scenarios.
Student Name: João Duarte
Title: Controllable Consistent Timbre Synthesis
Supervision Team: Seán O’Leary, TU Dublin / Naomi Harte, TCD
Description: The goal of the research will be to provide control over the design of consistent musical instruments. Until recently sound synthesis has been dominated by two approaches – physical modelling and signal modelling. Physical models specify a source. Once the source is specified the family of sounds coming from the source can be synthesised. Signal models, on the other hand, specify waveforms and so are very general. The major downside of signal models is that many parameters are required to specify a single sound. The goal of this project is to use machine learning algorithms to synthesise the parameters for a family of sounds related to a single source. This project will marry machine learning and signal processing techniques, including research into the use of generative algorithms, signal models and sound representations.
Student Name: Cormac (Patrick Cormac) English
Title: Accommodating Accents: Investigating accent models for spoken language interaction
Supervision Team: Julie Berndsen, UCD / John Kelleher, TU Dublin
Description: The recognition and identification of non-native accents is fundamental to successful human-human speech communication and to communication between humans and machines. Much of current speech recognition now uses deep learning, but a recent focus on interpretability allows for a deeper investigation of the common properties of spoken languages and language varieties which underpin different accents. This PhD project proposes to focus on what can be learned about non-canonical accents and to appropriately adjust the speech to accommodate the (machine or human) interlocutors by incorporating results of existing perceptual studies. The project aims to advance the state-of-the-art in spoken language conversational user interaction by exploiting the speaker variation space to accommodate non-native or dialect speakers. This will involve research into the salient phonetic properties identified during speech recognition that relate to non-canonical accents, and into how the speech or further dialogue can be adjusted to ensure successful communication.
Student Name: Megan Fahy
Title: Fostering digital social capital through design for wellbeing
Supervision Team: Marguerite Barry, UCD / Gavin Doherty, TCD / Jane Walsh, UoG
Description: Studies in eHealth have shown that communication technologies can support intervention and treatment through the exchange of ‘social capital’ (SC), a concept from social science with a range of individual and societal benefits. Although a strong association between SC and wellbeing has been identified, there is a lack of empirical data and consistent methods to foster social capital through design. This project begins by systematically exploring the position of social capital within HCI to identify core design challenges for eHealth. Then, using participatory design methods, it will prototype technologies and develop a novel design framework based on platform independent interactive features of digital applications.
Student Name: Ciara Finnegan
Title: How Full is Your Tank…Development of a ‘Readiness-to-Perform’ Tool for Monitoring & Predicting Sporting Performance
Supervision Team: Anna Donnla O’Hagan, DCU / Jane Walsh, UoG / Sarah Meegan, DCU
Description: The sports industry is a multi-billion-dollar industry in which coaches and sport science and medicine teams strive to push and progress an athlete’s performance year on year. Coaches and scientists structure training programmes with distinct periods of progressive overload coupled with recovery in an attempt to maximise or sustain performance during specific periods of competition. The objective of this research is to test the most relevant existing portable technologies for monitoring athletes’ physical, cognitive, and psychological readiness for sporting performance, and to develop a protocol combining bio-sensory and performance-based data acquisition (e.g., a mobile version of the Psychomotor Vigilance Task, CANTAB, wearable HRV/blood pressure devices, mobile EEG/ECG, etc.) to advise athletes and coaching personnel on an athlete’s individualised readiness for sporting performance. The project will ultimately deliver a proposal for a user-friendly tool for monitoring performance readiness to assist with continuous improvement in sporting performance and training practices.
Student Name: Aisling Flynn (graduated)
Title: A VR social connecting space for improved quality of life of persons with dementia
Supervision Team: Dympna Casey, UoG / Marguerite Barry, UCD / Attracta Brennan, UoG / Sam Redfern, UoG
Description: Reminiscence and music are two key strategies used to promote social connectedness and reduce loneliness. Listening to and sharing music and recalling events from the past reconnect the person with dementia to the present, enabling them to converse, interact and socialize. This research will create a set of meaningful multi-user VR spaces for people with dementia focused on providing opportunities to reminisce and to engage in music-based activities. VR design skills will be combined with user-centred design and public and patient involvement (PPI) to deliver an effective set of meaningful VR experiences that can be widely deployed to benefit persons living with dementia.
Student Name: Frank Fowley (submitted PhD, 2024)
Title: Towards a Translation Avatar for ISL – TrAvIS
Supervision Team: Anthony Ventresque, UCD / Carol O’Sullivan, TCD / Simon Caton, UCD
Description: Members of the Deaf community face huge barriers to access of essential services including health, education, and entertainment. Many people in the Deaf community have a low level of English literacy so although the Internet and other technology have improved access for some, the majority of the people in Ireland who use Irish Sign Language (ISL) as their first and/or preferred language struggle to access vital services. The aim of this PhD project is to propose a tool that translates spoken English to ISL and vice versa in real-time using a virtual avatar to improve accessibility and raise awareness of Sign Language and the issues faced by the Deaf community. This will involve research in Computer Vision, Machine Learning, Computational Linguistics, Audio Processing and Virtual Character Animation.
Student Name: Xiangpeng Fu
Title: Interfacing the Real and the Virtual using Mixed Reality and Projection Mapping
Supervision Team: Mads Haahr, TCD / Cathy Ennis, TU Dublin
Description: This project hypothesises that combining MR with projection mapping can offer considerable improvements in closely synchronised real and virtual environments, to the benefit of new types of UI affordances and new applications. Most current MR research is concerned with mapping events and actions from the real to the virtual, but through the use of projection mapping, a convincing mapping can also be made from the virtual to the real. Research questions: How can real and virtual environments be constructed and programmed using MR and projection mapping in tandem? What are the most suitable UI affordances for the resulting hybrid environments, and how is the user experience best evaluated? What are the best application domains for such environments? The questions will be explored through a literature review, the design and development of a prototype, and a user study. Possible application domains include industrial applications, cultural heritage, museum exhibits, art installations, training/education, health/wellbeing and the Metaverse.
Student Name: Angel Mary George
Title: Personalisation of relapse risk in autoimmune disease
Supervision Team: Mark Little, TCD / John Kelleher, TU Dublin / Declan O’Sullivan, TCD and Alain Pitiot, Ilixa software
Description: The PARADISE study targets development of a clinical decision support tool that personalises immunosuppressive drug (ISD) therapy in autoimmune disease. Using ANCA vasculitis as the exemplar condition and leveraging the Rare Kidney Disease registry and biobank, we will focus on deep phenotyping of the patient in remission. At this time point, we hypothesise that residual sub-clinical immune system activation renders the patient at high risk of subsequent relapse of the disease. Conversely, reversion of the immune system to a healthy resting state may indicate a very low flare risk. By using novel semantic web technology, we will integrate clinical, patient app-derived and multi-modal biomarker data streams to generate explainable machine learning models that predict the risk of flare. These will inform the physician’s decisions about increasing ISDs or, indeed, discontinuing them altogether. We envisage that this assessment will reduce both relapse and ISD-associated infection, reduce healthcare costs, increase quality of life and build human capital in a research area of importance to Ireland.
Student Name: Jack Geraghty
Title: Multimodal data wrangling for Real-time IoT-based Urban Emergency Response Systems
Supervision Team: Fatemeh Golpayegani, UCD / Andrew Hines, UCD / Rob Brennan, UCD
Description: Emergency Response Systems (ERS) enable the rapid location of emergencies and deployment of resources by emergency response teams. Historically this has been as a result of an emergency call from a person at the scene. Technology advancements in urban areas and so-called smart cities mean that Internet of Things-enabled infrastructure can offer a “single strike” data dump of multimodal information via the ERS. For example: in a vehicle collision, information regarding the crash severity, number of passengers, fuel type, etc. can be gathered from in-place cross-platform sensors, including vehicle and smartphone audio and accelerometer sensors, traffic cameras, etc. This information may be valuable to fire crews, ER staff and other members of the response team. The technical challenges to be addressed by this project will focus on audio and video processing, data collection and curation, and applying data-driven learning (e.g. deep learning and knowledge graphs) to cross-platform knowledge models. The student will identify and prioritise data sources, build a framework to integrate and generalize multi-modal data, and demonstrate how multiple platforms can assist in real-time ERS decision making.
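To make the “single strike” fusion concrete, here is a minimal sketch of merging multi-source reports into one incident record; the per-source priority scheme and source names are hypothetical stand-ins for the learned cross-platform knowledge models the project would actually develop:

```python
# Illustrative only: fusing multimodal reports into a single incident
# record. Source names and priorities are hypothetical.

PRIORITY = {"vehicle_can_bus": 3, "traffic_camera": 2, "smartphone": 1}

def fuse_reports(reports):
    """Merge field values, keeping the highest-priority source per field."""
    fused, provenance = {}, {}
    for report in reports:
        rank = PRIORITY.get(report["source"], 0)
        for field, value in report["data"].items():
            if field not in fused or rank > PRIORITY.get(provenance[field], 0):
                fused[field] = value
                provenance[field] = report["source"]
    return fused, provenance

reports = [
    {"source": "smartphone", "data": {"severity": "minor", "passengers": 2}},
    {"source": "vehicle_can_bus", "data": {"severity": "major", "fuel": "petrol"}},
]
incident, provenance = fuse_reports(reports)
```

Keeping per-field provenance alongside the fused record lets the response team see which platform contributed each value, which matters when sources conflict (as `severity` does above).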
Student Name: Michael Gian V. Gonzales
Title: On-Device Neural Speech Understanding for consumer devices
Supervision Team: Michael Schukat, UoG / Naomi Harte, TCD / Peter Corcoran, UoG / Gabriel Costache, Xperi / Martin Walsh, Xperi
Description: Today’s voice-based interfaces rely on a cloud-based infrastructure for data processing and interpretation, which raises issues regarding access to personal voice data by large corporations. Therefore, there is a growing trend in industry to move data processing and analysis closer to the source of the data – the microphone that senses speech. This can be achieved using newly developed neural accelerators (e.g. NVIDIA’s Jetson, Google’s TPU, Xilinx Vitis-AI and Perceive’s Ergo) that implement emerging neural-processing techniques. This research aims to investigate emerging trends in speech analysis and understanding with a focus on neural implementations and optimizations for the above accelerator platforms. It will examine memory and data-bandwidth aspects of recurrence in neural speech analysis, explore neural speech enhancement techniques to pre-process voice signals that are picked up from low-cost microphones, explore speech representations for neural accelerator platforms, and deliver a proof-of-concept smart speaker, demonstrating the feasibility of a practical stand-alone neural speech interface.
Student Name: Yasmine Guendouz
Title: Using Machine Learning to identify the critical features of carotid artery plaque vulnerability from Ultrasound images
Supervision Team: Caitríona Lally, TCD / Catherine Mooney, UCD
Description: Over one million people in Europe have severe carotid artery stenosis, which may rupture causing stroke, the leading cause of disability and the third leading cause of death in the Western World. This project aims to develop a validated means of assessing the predisposition of a specific plaque to rupture using Ultrasound (US) imaging. Using machine learning (ML) techniques, we will look at multiple US modalities concomitantly; including B-mode images, elastography and US imaging with novel contrast agents for identifying neovascularisation. Combining this with in silico modelling of the vessels will provide us with a unique capability to verify the clinical and US findings by looking at the loading on the plaques and therefore the potential risk of plaque rupture. Proof of the diagnostic capabilities of ML and non-invasive, non-ionising US imaging in vivo for the diagnosis of vulnerable carotid artery disease would be a ground-breaking advancement in early vascular disease diagnosis.
Student Name: Naile Burcu Hacioglu
Title: Establishing Design Principles to Combat Over-reliance on Day-to-Day Technologies and Cognitive Decline
Supervision Team: Hyowon Lee, DCU / Maria Chiara Leva, TU Dublin
Description: The concept of “usability” in the field of Human-Computer Interaction strives to make our technologies fit our tasks better, through improved efficiency, ease of use and satisfaction. Most of the web services and apps we use today in our everyday lives try to save our mental effort in recall (e.g. reminder apps), vigilance (e.g. notifications), arithmetic (e.g. calculator apps), spatial cognition (e.g. GPS step-by-step instructions), etc. While immediacy and convenience are among the main reasons we use these in the first place, increasing anecdotal and scientific evidence suggests that extended reliance on these tools can have negative consequences for our cognition. Through a series of iterations of brainstorming, interaction sketching/design and usability testing, this project will construct a new set of usability principles and guidelines for designing new user interfaces that minimise the potentially negative impact of over-reliance on day-to-day technologies.
Student Name: Mahmoud Hamash
Title: Designing VR environments for post-primary education
Supervision Team: Peter Tiernan, DCU / Gareth W. Young, TCD
Description: Virtual Reality (VR) research has developed at pace in fields such as construction, engineering, and healthcare, with promising results. However, the use and development of VR environments in post-primary education settings remains low. There is a need for research that not only examines teachers’ perceptions of and attitudes toward VR but focuses on the development of bespoke VR environments which meet the needs of post-primary teachers and their students and can demonstrate an impact on educational and motivational outcomes. This PhD will focus on designing, developing, and evaluating research and practitioner-informed VR environments for post-primary teachers and their students. The study will engage with practising post-primary teachers to identify appropriate curricular areas which could benefit from the integration of VR environments. Together with existing literature and case studies, these curricular areas will be used as a basis to develop VR environments for post-primary education. Practising post-primary teachers will help to inform the design and development of environments according to their curricular goals and student needs. VR environments will then be trialed and evaluated with teachers and their students and findings will be used to inform the use of VR in post-primary education.
Student Name: Timothy Hanley
Title: Non-Contact Sensing and Multi-modal Imaging for Driver Drowsiness Estimation
Supervision Team: Michael Schukat, UoG / Maria Chiara Leva, TU Dublin / Peter Corcoran, UoG and Joe Lemley, Xperi
Description: Drowsiness, i.e., the unintentional and uncontrollable need to sleep, is responsible for 20-30% of road traffic accidents. Recent WHO statistics show that road accidents are the eighth leading cause of death in the world, resulting in more than 1.35 million deaths annually. As a result, driver monitoring systems containing drowsiness detection capabilities are becoming more common and will be mandatory in new vehicles from 2025 onwards. But there remain significant challenges and unexplored areas, particularly surrounding multimodal imaging (NIR, LWIR and neuromorphic) techniques for drowsiness estimation. Therefore, the overall aim of this research is to improve and implement novel neural AI algorithms for non-contact drowsiness detection that can be used in unconstrained driving environments. In detail, this research will examine SoA non-contact driver drowsiness techniques, evaluate existing and research new drowsiness indicators, and build and validate innovative complex machine learning models that are trained with both public and specialized industry datasets.
Student Name: Syed Mohammad Haseeb ul Hassan
Title: Maintaining Flow in a Virtual Reality / Augmented Reality Environment
Supervision Team: Jennifer McManis, DCU / Attracta Brennan, UoG
Description: Virtual and Augmented reality (VR/AR) has redefined the interface between the digital and physical world, enabling innovative applications in the areas of entertainment, e-commerce, education, and training. Key to the attraction of VR/AR applications is the ability to provide an immersive experience, characterised by the concept of flow – the idea that the user can become “lost in the moment”. However, computer network constraints can interfere with data delivery and reduce the user’s Quality of Experience (QoE), interfering with their sense of flow. This project will focus on personalisation of VR/AR content delivery to maintain user QoE at a high level. A VR/AR Content Personalisation Algorithm will adapt content delivered according to user preferences and operational information about current network and device conditions. Key to this project’s success will be User Profiling and VR/AR QoE modelling, as well as a methodology to assess the impact of VR/AR on a user’s flow.
Student Name: Muzhaffar Hazman
Title: Analysing User Generated Multimodal Content and User Engagement in an Online Social Media Domain
Supervision Team: Josephine Griffith, UoG / Susan McKeever, TU Dublin
Description: In the context of online social media, much of the research work carried out to date uses the text from user posts and the social network structure. However, the trend in many social media platforms is a move from text to emojis, images, and videos, many of which are “memes” containing images superimposed with text. In this project we wish to analyse multimodal social media data in an entertainment domain. The aims of the project are 1) to analyse trends across different modalities of user-generated content, with respect to features such as topics, higher-level concepts of the content, user emotions, and social media engagement, and 2) to find how these features correlate with viewing figures. The analysis will be carried out using machine learning and deep learning techniques, in tandem with language models for text representation and interpretation and topic modelling techniques.
Student Name: Yuan He
Title: Motion in the Metaverse: Perception of identity and personality from embodied humans
Supervision Team: Rachel McDonnell, TCD / Brendan Rooney, UCD / Aphra Kerr, TU Dublin
Description: The Metaverse is expected to be a digital reality combining social media and gaming, with augmented and virtual reality, allowing users to interact virtually for both work and entertainment. The Metaverse cannot exist without avatars, which are virtual manifestations of the humans interacting in the space. In this project, we will investigate the perception of motion of avatars in social interactions in immersive Virtual Reality. In particular, we are interested in how the quality of motion mapped from the human affects perception of personality and personal identity. Additionally, the ethical, legal and social implications around how human motion data is captured and stored in the Metaverse will be investigated as part of this project.
Student Name: David Healy (graduated, 2024)
Title: Using digitally-enhanced reality to reduce sedentary behaviour at home in an elderly population
Supervision Team: Jane Walsh, UoG / Owen Conlan, TCD
Description: It is well known that regular physical activity (PA) limits the development and progression of chronic diseases and disabling conditions. However, time spent in sedentary behaviour (SB) has increased substantially over the last three decades and increases with age. The project will explore health behaviour change from a behavioural science perspective using the ‘person-based approach’ and will develop appropriate personalised behaviour change techniques (BCTs) integrated into VR systems to effectively reduce sedentary behaviour in older adults at home.
Student Name: Darragh Higgins (passed viva, 2024)
Title: VRFaces: Next-generation performance capture for digital humans in immersive VR
Supervision Team: Rachel McDonnell, TCD / Benjamin R. Cowan, UCD
Description: It is expected that immersive conferencing in virtual reality will eventually replace audio and video conferencing. By using embodied avatars in VR as digital representations of meeting participants, it could revolutionize the way business is conducted, allowing for much richer experiences incorporating the interpersonal communication that occurs through body language and social cues. However, creating virtual replicas of human beings is still one of the greatest challenges in the field of Computer Graphics. The aim of this project is to advance animation technology considerably, to allow a user to truly “become” their virtual character, feeling ownership of their virtual face, with near cinema-quality facial animation.
Student Name: Helen Husca
Title: Balancing Privacy and Innovation in Data Curation for VR/AR/XR
Supervision Team: Gareth W. Young, TCD / Dympna O’Sullivan, DCU / Harshvardhan Pandit, DCU
Description: This project will investigate and develop a framework for extended reality (XR) technologies that addresses concerns regarding security, privacy, and data protection. This focus is needed because XR technology requires the collection, processing, and transfer of (often sensitive) personal data. The appointed researcher will look at balancing innovation with privacy and data protection issues in XR. More specifically, they will identify and develop new ways to understand, analyze, and extend the use of existing or available XR data and data flows in ways that respect privacy and autonomy in emergent metaverse applications.
Student Name: Emily Ip
Title: When digital feels human: Investigating dialogue naturalness with multivariate neural data
Supervision Team: Giovanni M. Di Liberto, TCD / Benjamin R. Cowan, UCD
Description: Interaction with digital systems has become a pervasive daily experience (e.g., video calls, dialogue systems). One major barrier remains: users need to adapt the way they communicate to each particular digital system, which, for example, poses a challenge for inclusivity. This project will identify objective indices that quantify the naturalness of a conversation by using bio-signals recorded with electroencephalography and pupillometry. These metrics will inform us on how exactly different digital communication strategies (e.g., video-call software) impact cognition (e.g., cognitive load, phonological processing, temporal expectations). In doing so, this project will shed light on the key elements for producing adaptive dialogue systems.
Student Name: Zhannur Issayev
Title: Risk Measurement at and for Different Frequencies
Supervision Team: John Cotter, UCD / Pierpaolo Dondio, TU Dublin / Tom Conlon, UCD
Description: There are many risk events associated with trading that have affected markets, traders and institutions. These can occur very quickly or evolve more slowly over longer horizons. A common feature of these events is a lack of anticipation of the magnitudes of losses and a lack of controls in place to provide protection. A further common feature is that these can be large-scale events that are very costly and often systemic in nature. This project will apply alternative risk measures in setting margin requirements for futures trading, capital requirements for trading, and price limits and circuit breakers, to protect against extreme price/volume movements. The project will employ AI/ML techniques, along with other econometric principles, in risk measurement and management. It will look to identify strengths and weaknesses in applying AI/ML approaches to modelling financial risk, and especially systemic risk.
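For illustration, two of the risk measures in question, historical Value-at-Risk and Expected Shortfall, can be sketched from a hypothetical return series (a simple empirical-quantile version, not the project's methodology):

```python
# Illustrative only: historical VaR and Expected Shortfall from a
# hypothetical series of daily returns (as fractions).

def historical_var(returns, alpha=0.95):
    """Loss level exceeded on roughly (1 - alpha) of historical days."""
    losses = sorted(-r for r in returns)           # losses as positives
    index = int(alpha * len(losses))
    return losses[min(index, len(losses) - 1)]

def expected_shortfall(returns, alpha=0.95):
    """Average loss on the days at or beyond the VaR threshold."""
    losses = sorted(-r for r in returns)
    index = int(alpha * len(losses))
    tail = losses[index:] or losses[-1:]
    return sum(tail) / len(tail)

returns = [0.01, -0.02, 0.003, -0.05, 0.015,
           -0.01, 0.002, -0.03, 0.004, -0.004]    # hypothetical
var95 = historical_var(returns)
es95 = expected_shortfall(returns)
```

Margin or capital requirements could then be set as a multiple of such a measure; the project would compare alternatives like these against AI/ML-based risk models.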
Student Name: Rhys Jacka
Title: Perspective taking in multiparty human-machine dialogue
Supervision Team: Benjamin R. Cowan, UCD / Vincent Wade, TCD
Description: Work on human-machine dialogue suggests that people take their partner’s perspective into account when interacting with speech-based automated dialogue partners. Perspective taking more generally is seen as critical to successful communication. Through smart speakers, speech agents have become devices that hold conversations with multiple users. This move from dyadic to multiparty dialogue is likely to accelerate as agents take a more proactive approach to engaging users in dialogue, becoming a member of a team rather than the sole target of the interaction. This PhD aims to identify how perspective-taking mechanisms manifest in mixed human-speech agent teams, and how this influences user language choices when engaging with the speech agent.
Student Name: Priyansh Jalan
Title: Appearance Transfer for Real Objects in Mixed Reality
Supervision Team: John Dingliana, TCD / Cathy Ennis, TU Dublin
Description: Research in mixed reality is largely concerned with rendering virtual objects so that they appear plausibly integrated within a real environment. This project investigates the complementary problem of modifying the appearance of the real environment as viewed through a mixed reality display. For instance, a physical wall might be virtually removed so that objects can be seen through or embedded within it. However, merely removing existing surfaces may appear implausible, instead simulated geometry of a hole could be inserted to create the appearance of sections being cut away, or the object re-rendered as refractive glass so that we see through the surface but retain an understanding of the original geometry. The problem is particularly challenging in the context of modern Optical See-Through (OST) MR displays, such as Microsoft’s Hololens, where the real environment is seen directly through a transmissive screen, limiting the degree to which we can change its appearance.
Student Name: Peterson Jean
Title: Empowering older adults to engage with their own health data using multimodal feedback
Supervision Team: Emma Murphy, TU Dublin / Enda Bates, TCD
Description: Health data from physiological sensors is often conveyed to users through a graphical interface, but this is not always accessible to people with disabilities or older people due to low vision, cognitive impairments or literacy issues. Real-time user feedback may not be conveyed easily from sensor devices through visual cues alone, but auditory and tactile feedback can provide immediate and accessible cues from wearable devices. To avoid cognitive and visual overload, auditory and haptic cues can be designed to complement, replace or reinforce visual cues. This research will involve an exploration of the potential of multimodal cues to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate, etc. By creating innovative and inclusive user feedback, it is more likely that users will want to engage and interact with new devices and with their own data.
Student Name: Assim Kalouaz
Title: VR for good: Exploring the active mechanisms underlying the use of Virtual Reality to prompt positive mood and well-being
Supervision Team: Brendan Rooney, UCD / Pamela Gallagher, DCU
Description: Numerous studies report examples of virtual reality experiences being used to prompt positive emotions and improve health and well-being. Yet little is understood about the way in which such positive psychological outcomes are designed into the virtual experiences – what are the active mechanisms by which such immersive experiences can bring about positive emotions? This study explores the way in which the design of virtual reality experiences interacts with individual characteristics of the user to impact perception, attention, emotion and mood, empathy, and well-being. In order to do so, the study will identify and refine effective measures to be used in the research (these may include self-report, physiological and cognitive performance measures). Then, building on the exploration of prospective mechanisms, the study will field test some candidate experiences to prompt positive outcomes.
Student Name: Maziar Kanani
Title: Executable Design: AI Tools for Music
Supervision Team: James McDermott, UoG / Seán O’Leary, TU Dublin
Description: This project is about new AI-enabled creative tools for musicians and new creative ways for music consumers to interact with music. Modern AI methods can generate plausible-sounding music, at least in some cases. However, AI has not achieved anything like “understanding” of the internal structures, relationships and patterns which are deliberately designed by human composers. This understanding can best be represented by short programs, since (by the Church-Turing thesis) no other representation can be more powerful. Short programs are those which best capture all possible regularity and structure. We will use modern metaheuristic search methods such as genetic programming, and neural program synthesis, to automatically create short programs which, when executed, output given pieces of pre-existing music. Simple manipulations of these programs will give natural variations and extensions of the music, enabling exciting new tools for creativity.
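The core idea, searching for a short program whose output reproduces a given piece of music, can be illustrated with a toy sketch. The note DSL, fitness function and exhaustive search below are invented for illustration only; the project itself proposes genetic programming and neural program synthesis over far richer program spaces:

```python
from itertools import product

# Target melody as MIDI note numbers: a C major arpeggio played twice.
TARGET = [60, 64, 67, 72, 60, 64, 67, 72]

def run_program(start, intervals, repeats):
    """Execute a tiny three-parameter 'program': play `start`, walk up by
    the given intervals, then repeat the whole phrase `repeats` times."""
    phrase = [start]
    for step in intervals:
        phrase.append(phrase[-1] + step)
    return phrase * repeats

def fitness(notes, target):
    """Number of mismatched positions, padded to the longer sequence."""
    n = max(len(notes), len(target))
    return sum(1 for i in range(n)
               if i >= len(notes) or i >= len(target) or notes[i] != target[i])

def search(target):
    """Exhaustively search the tiny program space for the best match;
    a stand-in for genetic programming over a realistic music DSL."""
    best, best_fit = None, float("inf")
    for start in range(48, 73):
        for intervals in product([2, 3, 4, 5, 7], repeat=3):
            for repeats in (1, 2, 3):
                prog = (start, list(intervals), repeats)
                f = fitness(run_program(*prog), target)
                if f < best_fit:
                    best, best_fit = prog, f
    return best, best_fit

best, best_fit = search(TARGET)  # finds (60, [4, 3, 5], 2) with fitness 0
```

Once such a program is found, "simple manipulations" become concrete: changing `start` transposes the melody, changing an interval reshapes the arpeggio, and increasing `repeats` extends the piece.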
Student Name: Sajjad Karimian
Title: Integrating Human Factors into Trustworthy AI for Healthcare
Supervision Team: Rob Brennan, UCD / Siobhán Corrigan, TCD
Description: This PhD will explore the gap between Trustworthy Artificial Intelligence (TAI) guidelines and what is needed in practice to build trust in a deployed, AI-based system so that it is effective. It will seek new ways to measure, quantify and influence TAI system development in an organisational context. It will study socio-technical systems including AI components to make them more trusted and effective. It is an interdisciplinary topic drawing on both the Computer Science and Psychology disciplines. This PhD will partner with a national healthcare data analytics platform deployment to explore and define the factors required to assure trust in the system when AI components are deployed. Stakeholders will include: patient safety and quality monitoring professionals, clinicians, patients and the general public. It will investigate the key social and technical factors in deploying such a platform to increase trust, accountability, transparency and data altruism.
Student Name: Abdullahi Abubakar Kawu
Title: Exchanging personal health data with electronic health records: A standardized information model for patient generated health data
Supervision Team: Dympna O’Sullivan, TU Dublin / Lucy Hederman, TCD
Description: As healthcare technologies evolve, the management of health data is no longer only clinician-governed but also patient-controlled. Engagement with consumer health IT has been augmented by Internet of Things-based sensing and mobile health apps. Until recently, Electronic Health Records were seen as the main vehicle to drive healthcare systems forward, however a vital role is increasingly played by patients in controlling their own health information and self-managing their diseases. The objective of this research is the development of a novel middleware information model to facilitate better interoperability and exchange of patient generated health data between patients and providers. Key research challenges include the development of a conceptual architecture of an interoperable solution, semantic representation to enable data to be mapped to standardized biomedical vocabularies such as SNOMED-CT, syntactic representation to conform to healthcare standards such as HL7 FHIR and privacy and security requirements for transferring and storing personal health data.
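To make the interoperability challenge concrete, the sketch below shows the kind of mapping such a middleware layer might perform: a hypothetical wearable heart-rate reading converted into a minimal HL7 FHIR R4 Observation carrying both LOINC and SNOMED CT codings. The input dict format and patient identifier are invented for illustration; 8867-4 (LOINC) and 364075005 (SNOMED CT) are the standard heart-rate codes:

```python
# Toy mapping from a hypothetical wearable reading to a minimal HL7 FHIR R4
# Observation resource. Only the mapping idea is illustrated; a real
# middleware layer would also handle identity, consent, provenance and
# validation against FHIR profiles.
def reading_to_fhir_observation(reading, patient_id):
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{
            "coding": [{
                "system": "http://terminology.hl7.org/CodeSystem/observation-category",
                "code": "vital-signs",
            }]
        }],
        "code": {
            "coding": [
                # LOINC and SNOMED CT codes for heart rate.
                {"system": "http://loinc.org", "code": "8867-4",
                 "display": "Heart rate"},
                {"system": "http://snomed.info/sct", "code": "364075005",
                 "display": "Heart rate"},
            ]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": reading["timestamp"],
        "valueQuantity": {
            "value": reading["bpm"],
            "unit": "beats/minute",
            "system": "http://unitsofmeasure.org",
            "code": "/min",
        },
    }

obs = reading_to_fhir_observation(
    {"timestamp": "2024-01-15T08:30:00Z", "bpm": 72}, "example")
```

The point of the double coding is exactly the semantic-representation challenge described above: the same patient-generated measurement must be legible both to systems keyed on LOINC and to those keyed on SNOMED CT.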
Student Name: Bilal Alam Khan
Title: Driver sensing & inclusive adaptive automation for older drivers
Supervision Team: Sam Cromie, TCD / Maria Chiara Leva, TU Dublin
Description: The proportion of over 65s in the population is growing; by 2030 a quarter of all drivers will be older than 65. At the same time, transport is being transformed by connected, automated and electric vehicles. This project will take a user-centred design approach to understanding the needs of older drivers, exploring how these could be addressed through driver sensing and adaptive automation. Progress beyond the state of the art will include a technology strategy for an inclusive personalized multimodal Human Machine Interface (HMI) for older drivers and an inclusive standard for driver sensing and/or HMI for connected and autonomous vehicles.
Student Name: Malik Awais Khan
Title: Multi-Modal Age Assurance in Mixed Reality Environments for Online Child Safety
Supervision Team: Christina Thorpe, TU Dublin / Peter Corcoran, UoG
Description: This PhD research project aims to create a multi-modal interface for age assurance in mixed reality environments for online child safety. The solution will incorporate machine learning, computer vision, NLP, and biometric analysis to analyse physical attributes, contextual information, and biometric data of the user for accurate age verification while preserving privacy. The project has significant potential to improve online child safety by providing a reliable and precise means of age verification, ensuring that children are not exposed to inappropriate content or interactions with online predators. The project will also develop the candidate’s skills in digital platform technologies such as HCI and AI, data curation, and privacy-preserving algorithms. Overall, the project aims to make a notable contribution to the field of online child safety through the creation of an innovative age assurance solution.
Student Name: Md Raqib Khan
Title: Stereo Matching and Depth Estimation for Robotics, Augmented Reality and Virtual Reality Applications
Supervision Team: Subrahmanyam Murala, TCD / Peter Corcoran, UoG / Carol O’Sullivan, TCD
Description: Stereo matching and depth estimation are crucial tasks in 3-D reconstruction and autonomous driving. Existing deep-learning approaches achieve remarkable performance compared with traditional pipelines. These approaches have improved performance on difficult depth estimation datasets but generalise poorly beyond them. Further, advanced computer vision applications such as augmented reality and virtual reality demand real-time performance. In order to achieve this and overcome the limitations of existing learning-based approaches, this project will involve the design and development of learning-based methods for stereo matching and depth estimation, with the goal of developing lightweight deep learning models for real-time depth estimation for AR, VR and robotics applications.
Student Name: Daragh King
Title: [TBD]
Supervision Team: Vasileios Koutavas, TCD / Liliana Pasquale, UCD
Description: [TBD]
Student Name: Kristina Koch
Title: Authenticity in Dialogue
Supervision Team: Carl Vogel, TCD / Eugenia Siapera, UCD
Description: Authenticity in communication is of utmost importance to those who attempt to feign authenticity and is also relevant to those who would prefer to banish inauthenticity, whether the sphere is public relations, politics, health care, dating or courts of law. Dialogue interactions in multiple modalities will be analyzed with the aim of identifying features that discriminate authentic and pretended engagement. The work will involve assembling a multi-modal corpus of evidently un-scripted dialogue interactions, annotation with respect to authenticity categories of interest, and analysis through combinations of close inspection, semi-automated processing and data mining to identify features that separate authentic and inauthentic dialogue communications.
Student Name: Hubert Kompanowski
Title: Cross-Modality Generative Shape and Scene Synthesis for XR applications
Supervision Team: Binh-Son Hua, TCD / Hossein Javidnia, DCU / Carol O’Sullivan, TCD
Description: In the modern era of deep learning, recent developments in generative modeling have shown great promise in synthesizing photorealistic images and high-fidelity 3D models with high-level and fine-grained controls induced by text prompts via learning from large datasets. This research project aims to investigate the generation of 3D models from a cross-modality perspective, developing new techniques for realistic image synthesis and 3D synthesis that can serve as the building blocks for the next generation of 3D modeling and rendering tools. In particular, we will target high-quality 3D model synthesis at object and scene level, investigating how generative adversarial networks and diffusion models can be applied to generating high-fidelity and realistic objects and scenes. As proof-of-concept applications, we will apply the developed techniques to rapid modeling of 3D scenes for AR/VR applications.
Student Name: Sanjay Kumar
Title: Computational argumentation theory for the support of deliberation processes in online deliberation platforms
Supervision Team: Jane Suiter, DCU / Luca Longo, TU Dublin
Description: This project focuses on argumentation, an emerging sub-topic of artificial intelligence aimed at formalising reasoning under uncertainty with conflicting pieces of knowledge. The project will deploy argumentation theory, a paradigm for implementing defeasible reasoning, with a key focus on conflict resolution. In recent years deliberation has emerged as a key component of democratic innovation, enabling decision making and reason giving, helping the public make more enlightened judgements and act accordingly. This is an interdisciplinary project which will utilise this computational approach to examine real-world deliberative public engagement activities, including key citizens’ assemblies. Specifically, the aim of this research is to deploy formal argumentation in the modelling and analysis of deliberative discourses in deliberative events, for example discussions on the topics of climate change or immigration. Thus, from a collection of conflicting human points of view, this will enable the automatic extraction of the most deliberative views on a topic. These can then be justified and explained to the public, in turn supporting understanding and decision-making.
Student Name: Sibéal Ua Léanacháin
Title: Value-led design for personalized enhanced reality applications
Supervision Team: Heike Felzmann, UoG / Marguerite Barry, UCD
Description: This project will investigate the ethical, legal and HCI aspects associated with the personalisation of enhanced reality and virtual reality applications, with the aim to identify relevant concerns and develop potential solutions for problematic uses of those technologies. The project will draw on use cases from UoG groups in the field of eHealth and smart city applications from a Value-Sensitive Design perspective. It aims to identify relevant value-related concerns for specific applications and explore the potential generalizability to other application domains in the field of enhanced and virtual reality.
Student Name: Jiawen Lin
Title: Improving Open Domain Dialogue Systems and Evaluation
Supervision Team: Yvette Graham, TCD / Benjamin R. Cowan, UCD
Description: Do you wish Alexa was more reliable, entertaining and funny? Dialogue systems, such as Alexa, are currently incapable of communicating in a human-like way and this is one of the grandest challenges facing Artificial Intelligence. This project involves developing new approaches to dialogue systems that will allow the systems we interact with every day to become more personable and easier to communicate with. The focus will be on examining how existing dialogue systems work and where they need improvement. The project will also look at developing ways of giving systems more personality, making them better at responding to instructions and even making them more entertaining for users to interact with.
Student Name: Zhaofeng Lin
Title: Multimodal and Agile Deep Learning Architectures for Speech Recognition
Supervision Team: Naomi Harte, TCD / Robert Ross, TU Dublin
Description: Speech recognition is central to technology such as Siri and Alexa, and works well in controlled environments. However, machines still lag behind humans in our ability to seamlessly interpret multiple cues such as facial expression, gesture, word choice and mouth movements to understand speech in noisier or more challenging environments. Humans also have a remarkable ability to adapt on the fly to changing circumstances in a single conversation, such as intermittent noise or speakers with significantly different speaking styles or accents. These two skills make human speech recognition extremely robust and versatile. This PhD seeks to develop deep learning architectures that can better integrate the different modalities of speech and also be deployed in an agile manner, allowing continuous adaptation to external factors. These two aspects are inherently intertwined and are key to developing next-generation speech recognition solutions.
Student Name: Michela Lorandi
Title: Using Disentangled Language Learning for Stylistically and Semantically Controllable Language Generation
Supervision Team: Anya Belz, DCU / Sarah Jane Delany, TU Dublin
Description: Natural Language Generation (NLG) has made great strides recently owing to the power of transformer language models (LMs). The best generators produce text on a par with high-quality human writing, but have a very high carbon footprint and are marred by hallucinated content, and logical and factual errors. The general question addressed by this thesis proposal is: to what extent can LM learning be disentangled so that different aspects of language can be learnt and retained separately, potentially enabling (i) more direct control over meaning and style, (ii) different systems to share generic knowledge, and (iii) a system to generate the same meaning in any number of different idiolects, among other benefits, including a reduction in energy requirements. The work will start with experiments (i) generalising basic disentangling language-GANs, and (ii) adapting state-of-the-art text-to-image techniques capable of controlling style and semantics of generated images via autoregressive transformers.
Student Name: Luz Alejandra Magre Colorado
Title: Smart garments for immersive home rehabilitation using AR/VR
Supervision Team: Shirley Coyle, DCU / Carol O’Sullivan, TCD
Description: Smart garments provide a natural way of sensing physiological signals and body movements of the wearer. Such technology can provide a comfortable, user-friendly interface for digital interaction. This project aims to use smart garments for gesture and movement detection to enhance user interaction within augmented and virtual reality systems. Applications for this research include home-based rehabilitation systems for recovery from stroke, traumatic brain injury or spinal cord injuries. The goal will be to improve adherence by making exercise programs more engaging for the user while at the same time gathering valuable information for therapists regarding the user’s performance and recovery over time. The novel combination of smart garments with AR/VR environments will promote a greater level of immersion into the virtual world beyond conventional controls and create a novel, user-focused approach to human-machine interfacing.
Student Name: Anna-Lisa Mann
Title: Beyond CBT: innovative exploration of digital support for psychological therapies
Supervision Team: Gavin Doherty, TCD / David Coyle, UCD / Corina Sas, Lancaster University
Description: There has been a huge rise in the research and deployment of digital therapies in the area of mental health and mental wellbeing. Within existing research, there has been an understandable focus on digital support for Cognitive Behavioural Therapy (CBT), due to its strong evidence base, wide use, and relatively structured nature, which makes it amenable to digital support and delivery. However, CBT will not be suitable or effective for everyone, and there are many other evidence-based therapies available. Alternatives inspired by third-wave therapeutic approaches, such as compassion-focused, emotion-focused, or mindfulness-based therapy, go beyond the cognitive and behavioural aspects addressed in CBT to support key emotional and bodily symptoms linked to emotional wellbeing and mental health. However, there has been limited work exploring how these newer types of interventions might be delivered or supported with digital technologies. This PhD project will focus on exploring alternative evidence-based therapies and how these could be digitally supported.
Student Name: Vladimir Marochko
Title: A novel framework for neuroscience and EEG event-related potentials with virtual reality and deep autoencoders
Supervision Team: Luca Longo, TU Dublin / Rachel McDonnell, TCD
Description: The use of electroencephalography (EEG) within human-computer interaction and for brain-computer interface (BCI) development is increasing. Within neuroscience, there is an increasing emphasis on the analysis of event-related potentials (ERPs) for brain functioning in ecological contexts rather than exclusively in lab-constrained environments. This is because humans might not behave naturally in controlled settings, thus influencing the reliability of findings. However, EEG studies performed in natural/ecological settings are more problematic than in controlled settings, because researchers have less control over the EEG equipment. For these reasons, a new trend is devoted to the application of Virtual Reality (VR) in the context of ERP research, for the development of ecological virtual environments similar to real ones. The advantage is that a traditional ERP study can still be performed in supervised settings, while giving the researcher full control over experimental factors and EEG equipment. This PhD will produce a novel framework that will allow scholars to perform ERP-based research in ecological settings by employing VR and by constructing autoencoders, fully unsupervised deep-learning methods, for automatic EEG artefact reduction, taking advantage of not only the temporal dynamics of EEG but also its spatial and frequency-domain characteristics.
Student Name: Margot Masson
Title: Could you please repeat that? Deep learning non-native speech patterns
Supervision Team: Julie Berndsen, UCD / Anthony Ventresque, UCD / Naomi Harte, TU Dublin
Description: Voice interfaces have become pervasive in our daily lives, with industry now looking to further transform user voice-experiences. The success of the big players has been due to the enormous amount of data they now have available, which can be exploited by deep learning technologies. While speech recognition is often regarded as a “solved problem”, non-native and accented speech (e.g. regional variation, dialect) continues to be problematic, primarily due to the lack of data. This PhD project draws together natural language processing, speech technologies, quality assessment and AI, applying deep learning to the identification of native/non-native speaker models for personalised interactive language education and to assessing the performance of speech recognition systems in order to provide feedback to their developers.
Student Name: Farzin Matin
Title: ASD Augmented: Influencing pedagogical perspectives and practices
Supervision Team: Eleni Mangina, UCD / Aljosa Smolic, TCD
Description: This project begins with the hypothesis that the emerging technology of Augmented Reality (AR) will influence the pedagogical perspectives and practices for students with ASD. Research studies indicate that students with autism choose majors in Science, Technology, Engineering and Maths (STEM) at higher rates than students in the general population. They are “looking for patterns, and in Science it is natural to look for patterns that reflect natural law”. The aim is to identify the impact of AR on concentration for students diagnosed with ASD.
Student Name: Muhammad Hani Menazel Al Omoush
Title: Immersive Mathematics – Design for All
Supervision Team: Monica Ward, DCU / Emma Murphy, TU Dublin / Tracey Mehigan, UCC
Description: Mathematical equations are an essential component of education for students across all levels; however, they are not always presented to students in a manner that suits their individual needs. For example, students with dyslexia may struggle with symbols but benefit from spatial representations, whereas for blind students spatial layout becomes a barrier. To address these and other issues, this project explores the potential of multimodal feedback in immersive environments for personalised presentation of mathematics via an Artificial Intelligence in Education (AIEd) framework for automatic adaptation to the learner, their capabilities and the learning situation.
Student Name: Kamran Mir
Title: Mapping the analysis of students’ digital footprint to constructs of learning
Supervision Team: Geraldine Gray, TU Dublin / Tracey Mehigan, DCU / Ana Schalk, TCD
Description: This proposal explores the importance of learning theories in informing the objective evaluation of learning practice, as evidenced by the analysis of multimodal data collected from the eclectic mix of interactive technologies used in higher education. Frequently, learning analytics research builds models from trace data easily collected by technology, without considering the latent constructs of learning that data measures. Consequently, resulting models may fit the training data well, but tend not to generalise to other learning contexts. This study will interrogate educational technology as a data collection instrument for constructs of learning, by considering the influence of learning design on how learning constructs can be curated from these data. Results will inform methodological guidelines for data curation and modelling in educational contexts, leading to more generalizable models of learning that can reliably inform how we act on data to optimize the learning context for students.
Student Name: Peshawa Mohammed
Title: Linked geospatial data supporting Digitally Enhanced Realities
Supervision Team: Santos Fernández Noguerol, TU Dublin / Declan O’Sullivan, TCD / Avril Behan, TU Dublin
Description: As with many other complex multi-property environments, navigation around healthcare campuses is a significant challenge for a variety of stakeholders both during everyday usage (by clients, visitors, healthcare professionals, facility managers, equipment and consumables suppliers, and external contractors) and during design for construction and redesign for renovation/retrofit. This project will progress the integration of the currently diverse and unconnected geospatial, BIM and other relevant data to deliver better return on investment for both operational and development budget holders, while also developing the research capabilities of graduates and the organisations with whom this project engages (for example: Ordnance Survey Ireland, HSE).
Student Name: Anwesha Mohanty (graduated, 2024)
Title: Synthetic visual data generation and analysis of Rosacea from limited data
Supervision Team: Hossein Javidnia, DCU / Alistair Sutherland, DCU / Rozenn Dahyot, Maynooth University
Description: The prevalence of skin diseases is increasing globally. A fast, accurate and low-cost system for diagnosis would be very beneficial, especially in developing countries. The accurate detection of skin lesions, inflammation and the different subtypes of diseases such as rosacea and seborrheic dermatitis is vital for early treatment and medication. In this research, a triple-stage approach will be carried out, which focuses on 3D Computer Vision, Image Processing and Machine Learning. The aim of this project is to identify skin disorders for subtypes of rosacea and other skin conditions by establishing an image-based diagnosis system using 3D Computer Vision, Machine Learning and Artificial Intelligence. The system should be easily usable both by specialist clinicians and by general practitioners (GPs).
Student Name: Kesego Mokgosi
Title: Adaptive Mulitimodal Avatars for Speech Therapy Support
Supervision Team: Robert Ross, TU Dublin / Naomi Harte, TCD / Cathy Ennis, TU Dublin
Description: Pediatric speech and language therapy is a challenging domain that can benefit from effective conversational coach design. In this research we aim to push the boundaries of virtual agents for speech therapy by investigating methods for speech therapy system development, so as to make the system not only effective at communicating speech therapy goals, but also able to deliver such instruction through a fluent and approachable avatar that the young service user can engage with. This work will involve systematic research with key partners, including therapists and end users, as well as the development of prototype personas and strategies for a conversational speech therapy system to supplement in-clinic care. This work is well suited to a Computer Scientist who has experience in either virtual character design or conversational system development. The ideal candidate will also have an interest in user studies and health care.
Student Name: Théo Morales (passed viva, 2024)
Title: Hand-object manipulation tracking using computer vision
Supervision Team: Carol O’Sullivan, TCD / Gerard Lacey, TCD / Alistair Sutherland, DCU
Description: Current hand tracking for VR/AR interfaces focuses on the manipulation of virtual objects such as buttons, sliders and knobs. Such tracking is most often based on tracking each hand independently and when hands become partially occluded or are grasping a real object the hand tracking often fails. Tracking the hands during the manipulation of real-world objects opens up AR/VR to much richer forms of interaction and would provide the basis for activity recognition and the display of detailed contextual information related to the task at hand. This PhD project involves researching the tracking of unmodified hands with an ego-centric camera (2D and 3D) in the presence of partial occlusions. Technologies will include the use of deep learning models in combination with 3D models to determine hand pose in the presence of occlusion. Our approach will also exploit high level knowledge about object affordances and common hand grasp configurations which is commonly used in Robotic grasping.
Student Name: Yasmin Moslem (graduated, Apr. 2024)
Title: MT system selection and recycling/fixing recycling candidates in a hybrid set-up
Funding: This PhD is sponsored by Microsoft Ireland Research
Supervision Team: Andy Way, DCU / John Kelleher, TU Dublin
Description: Domain-tuned MT systems outperform general-domain MT models when they are used to translate in-domain data. However, it may not always be known in advance of translation time which domain is best suited to a particular text or sentence, and even for a known domain like software, some strings may be better translated by a general-domain system. This gives rise to a number of research questions, including: Given multiple domain-tuned NMT systems and translation candidates, how do we analyze an incoming string and determine which system will do the best translation at runtime? How do we best assess which translation candidate is the best choice? What are the best approaches for NMT? Also, if we have access to recycling (in a Translation Memory), when is a recycling match better than an MT candidate? Can NMT help fix high-quality TM matches? Can a better translation candidate be found by combining elements of multiple translations, from recycling and MT systems? Can post-editing data be leveraged, e.g. in a form of automatic post-editing approach?
Student Name: Hrishikesh Mulay
Title: Annotated lip reading for Augmented Educational Systems
Supervision Team: Eleni Mangina, UCD / Sam Redfern, UoG
Description: This project begins with the hypothesis that emerging technologies (Augmented and Virtual Reality – AR/VR) will influence the pedagogical perspectives and practices for students with literacy problems. A review of the literature has shown that lip reading remains a challenging topic due to the lack of annotated datasets in this domain. For example, automatic speech recognition and speech analysis require a comprehensive dataset, which is expensive as well as time-consuming to gather. This project considers embedding a 3D avatar within an augmented educational system, with the capacity for semi-supervised learning, to generate and manipulate fabricated data from the real input data. The aim is to identify how AR or VR (as an alternative) can assist with the collection of data through an immersive environment.
Student Name: Prashanth Nayak (graduated, Apr. 2024)
Title: Targeted Improvements for Technical Domain Machine Translation
Funding: This PhD is sponsored by Microsoft Ireland Research
Supervision Team: Andy Way, DCU / John Kelleher, TU Dublin
Description: Neural MT (NMT) has delivered significant improvements in overall translation quality in recent years, but even the latest models struggle with accurately translating brand names and important technical terms. How can accurate translation be ensured for brand names and terms with known approved translations, even if the training data contains alternative translations? Can contextual clues be used to force the correct translation of ambiguous terms? This PhD will focus on exploring how improved term translation can be integrated within a general-domain NMT model, to make targeted improvements to the overall translation quality. The main application area is MT for custom domains, such as information technology and software localisation.
Student Name: Iqra Nosheen
Title: Virtual Reality for Robust Deep Learning in the Real World
Supervision Team: Michael Madden, UoG / Cathy Ennis, TU Dublin
Description: There have been notable successes in Deep Learning, but the requirement to have large, annotated datasets creates bottlenecks. Datasets must be carefully compiled and annotated with ground-truth labels. One emerging solution is to use 3D modelling and game engines such as Blender or Unreal to create realistic virtual environments. Virtual cameras placed in such environments can generate images or movies, and since the locations of all objects in the environment are known, we can computationally generate fully accurate annotations. Drawing on the complementary expertise of the two supervisors, the PhD student will gain a synergy of expertise in Graphics Perception and Deep Learning. This PhD research will investigate questions including: (1) strategies to combine real-world and virtual images; (2) the importance of realism in virtual images; (3) how virtual images covering edge cases and rare events can increase the reliability, robustness and trustworthiness of deep learning.
Student Name: Megan Nyhan
Title: Ethical Recommendation Algorithms: Developing an ethical framework and design principles for trustworthy AI recommender systems
Supervision Team: Susan Leavy, UCD / Josephine Griffith, UoG
Description: AI-driven recommendation algorithms are profoundly influential in society. They are embedded in widely used applications such as Instagram and TikTok, disseminating content including social media, video or advertising according to user profiles. However, without appropriate ethical frameworks and design principles, they have the potential to lead to online harm, particularly for vulnerable groups. Ethical issues concern inappropriate content, risks to privacy and a lack of algorithmic transparency. In response, the EU and Irish government are developing regulations for AI. However, given the complex nature of recommender systems, there are significant challenges in translating these regulations into implementable design guidelines and ethical principles. This project will develop an ethical framework and design principles for recommender algorithms, ensuring the development of trustworthy recommender systems, enabling ethics audits and, ultimately, helping to protect users from risks of online harm.
Student Name: Chun Wei Ooi
Title: Enhancing Visual and Physical Interactions in Augmented Reality
Supervision Team: John Dingliana, TCD / Cathy Ennis, TU Dublin
Description: This project deals with advancing the state-of-the-art in the rendering and simulation of high-fidelity animated virtual objects in augmented reality (AR) environments. In particular, we will develop novel techniques for improving the perceived realism of interactions between real-world objects and dynamic virtual elements in real-time. To address this problem, we will investigate the use of unified adaptive level-of-detail volumetric models that will serve as proxy geometry for both the real-world environment scanned by the AR system and the virtual objects generated and simulated by the animation system.
Student Name: Alfredo Ormazabal (submitted PhD, 2024)
Title: Incorporating patient-generated health data into clinical records
Supervision Team: Lucy Hederman, TCD / Damon Berry, TU Dublin
Description: Patient-generated health data (PGHD), that is data originating from patients or their carers, not from clinicians, is a growing feature of chronic disease care. PGHD has the potential to impact health care delivery and clinical research. This PhD will focus on informatics aspects of these challenges, exploring how to allow for the incorporation of PGHD in EHRs and clinical systems, taking account of data interoperability issues, ensuring standardisation of the non-clinical data, and the appropriate representation of metadata about quality, governance and provenance. The research will be grounded in the Irish national health system and will seek to align with the national EHR project.
Student Name: Giulia Osti
Title: Customising AI for digital curation work that utilises controlled vocabularies
Supervision Team: Amber Cushing, UCD / Suzanne Little, DCU
Description: Digital curators are the “frontline” practitioners who work to appraise, select, ingest, apply preservation actions, maintain, and then provide access to, and enable use of, digital heritage objects for all types of users, from digital humanities scholars to tourists. This work has the potential to benefit from AI technology, particularly computer vision. Ethical issues surround the use of controlled vocabulary classification systems that digital curators utilise to arrange and describe digitised historical photograph collections in heritage institutions. If these ethical concerns are not addressed, uptake of AI technology in this sector may be slow or limited. The project will explore the ethical and social context of digital curation work to inform customisation of an AI model for use in the sector. The project will utilise Microsoft Azure Cognitive Services to customise and refine a computer vision (CV) model for use with the Library of Congress subject heading classification system. This position is based at the UCD School of Information & Communication Studies.
Student Name: Ayushi Pandey (graduated, 2024)
Title: Human Speech – How do I know it’s Real?
Supervision Team: Naomi Harte, TCD / Julie Berndsen, UCD
Description: How can you tell when speech is real, or when it is fake? This is the focus of this PhD project and it goes to the very core of the nature of human speech. Directly relating what is observable at a signal level in speech to how natural that signal is, as perceived by a human, is an unsolved problem in speech technology. This PhD addresses this gap in knowledge. The research will leverage a wealth of data from recent Blizzard speech synthesis challenges, where the naturalness of multiple world-class speech synthesis systems has been rated and made publicly available for researchers. Simultaneously, the research will also leverage shared datasets on spoofing from the automatic speaker verification community, such as those available through http://www.asvspoof.org/. The research is truly novel in that it goes beyond treating speech purely as a signal, and will bring the work to the level of investigating naturalness in continuous speech, over many seconds and sentences of generated speech.
Student Name: Cristina Perea del Olmo
Title: Supporting help-seeking and recommendations for mental health in young adults
Supervision Team: David Coyle, UCD / Gavin Doherty, TCD / Marguerite Barry, UCD and Claudette Pretorius, UCD
Description: Seeking help is a critical first step in addressing mental health difficulties. Evidence suggests that positive help-seeking experiences contribute to an increased likelihood of future help-seeking and to improved mental health outcomes. Increasingly, help-seeking now starts online. However, help-seeking is a complex process. This project will address known limitations of current online help-seeking technologies, including a tendency towards information overload, medicalized recommendations, and a lack of personalization. It will focus on the help-seeking needs of young adults, aged 18-25, and will be undertaken in collaboration with national youth mental health organisations. The aim is to develop guided help-seeking technologies including voice and chat-based agent systems, social help-seeking technologies, and conversational recommender systems. The research will be guided by past research that has emphasised the importance of four key design considerations: support for different levels of human connectedness, accessible and trustworthy information, personalisation that respects autonomy, and the need for immediacy. From a theoretical perspective it will explore how traditional models of help-seeking can be integrated with theories of information search and of engagement in Human Computer Interaction.
Student Name: Stephen Pilli
Title: Nudging Humans To Have Ethically-Aligned Conversations
Supervision Team: Vivek Nallur, UCD / Vincent Wade, TCD
Description: Conversational AI (or chatbots) exists in obvious conversation-enabled devices such as Alexa, Google Assistant, or Siri, but also in devices such as smartphones, smartwatches, FitBit, etc. Given that we spend almost all our waking hours in close or constant contact with a smart device, it is trivial for the device to nudge our attention to news/views/decision-options that it considers important. Nudges are behavioural interventions that arise primarily from human decision-making frailties (e.g., loss-aversion, inertia, conformity) and opportunity seeking. However, not all humans are affected by the same biases, i.e., a nudge that works on one person may not work on another. This project investigates the possibility of a conversational agent nudging the human to use/employ ethically grounded language in conversations and texts. The conversational agent will attempt to deliver nudges in an adaptive manner, with the objective of making the human more ethically aware.
Student Name: Breanne Pitt
Title: Multi-Perspectivity in Next-Generation Digital Narrative Content
Supervision Team: Mads Haahr, TCD / Marguerite Barry, UCD
Description: Stories and storytelling are crucial to the human experience as well as to the creation of meaning individually and socially. However, today’s most pressing issues, such as climate change and the refugee crisis, feature multilateral perspectives with different stakeholders, belief systems and complex interrelations that challenge traditional ways of narrative representation. Existing conventions (e.g., in news and on social media) lack the expressive power to capture these complex stories and too easily become prone to oversimplified presentation of complex material – even fake news – resulting in polarization of populations. Taking its starting point in the System Process Product (SPP) model developed by Koenitz (2015), this research will develop a narrative architecture useful for structuring multi-perspective narrative content and evaluate it through the creation of multi-perspective narratives, at least one of which will be a VR/AR/MR experience.
Student Name: Anastasiia Potiagalova
Title: Conversational Search of Image and Video with Augmented Labeling
Supervision Team: Gareth Jones, DCU / Benjamin R. Cowan, UCD
Description: The growth of media archives (including text, speech, video and audio) has led to significant interest in the development of search methods for multimedia content. A significant and rapidly expanding new area of search technology research in recent years has been conversational search (CS). In CS users engage in a dialogue with an agent which supports their search activities, with the objective of enabling them to find useful content more easily, quickly and reliably. To date, CS research has focused on text archives; this project is the first to explore CS methods for multimedia archives. An important challenge within multimedia search is formation of queries to identify relevant content. This project will seek to address this challenge by exploring the use of technologies from augmented reality to dynamically label images and video displayed within the search process, to assist users in forming more effective queries using a dialogue-based search framework.
Student Name: Darren Ramsook (passed viva, 2024)
Title: Video Coding Artefact Suppression Using Perceptual Criteria
Supervision Team: Anil Kokaram, TCD / Noel O’Connor, DCU
Description: Video traffic accounts for about 70% of all internet traffic now, and predictions are on track for 80% by 2022. Data compression is the only reason that video has not broken the system. However, lossy video compression causes artefacts, e.g. blocking and contouring, which have to be removed by the video player receiving the compressed data. None of the current techniques for removing these artefacts exploits visual quality criteria relevant for humans. This causes a problem for video consumed on different devices. By exploiting the visibility of artefacts on different devices, this project develops new techniques for artefact reduction that are sensitive to the human visual system, hence enabling appropriate video quality/bitrate compromises to be made for different devices.
Student Name: David Redmond
Title: Virtual Reality, User Experience and Psychosocial Outcomes
Supervision Team: Pamela Gallagher, DCU / Brendan Rooney, UCD
Description: To date, the application of virtual reality in psychological research and practice has predominantly focused on overall functional and clinical outcomes following a VR-assisted intervention. There is enormous scope to learn about how specific aspects of the VR experience, both recreationally and during interventions, contribute to these positive outcomes. This research will explore how the user-VR interaction aids the exploration of self-conceptions and identity to optimise well-being and personally meaningful outcomes. This project will explore (1) whether and how specific features and aspects of the VR experience (e.g. setting and narrative) impact on psychological wellbeing; (2) what psychological outcomes are impacted (e.g. how does it shape a person’s sense of self and identity); and (3) how can this knowledge of the VR experience be applied to clinical populations (e.g. amputees)?
Student Name: Gearóid Reilly
Title: A Multi-User VR Recreational Space for People with Dementia
Supervision Team: Sam Redfern, UoG / Gabriel-Miro Muntean, DCU / Attracta Brennan, UoG
Description: Dementia is one of the greatest societal and economic health challenges of the 21st century, and a number of research initiatives have proven the usefulness of VR as a therapy tool. Although removing social isolation and supporting re-connection with friends and family are central to improving outcomes for people with dementia, networked VR-based therapy technologies with an emphasis on social activity have not previously been studied. This project will create a multi-user VR space where socialization and social performance are supported. The VR space will be immersive, activity-based and facilitate multi-user interactions enabling the person to engage with a professional therapist, or their friends and family, without the logistical difficulties of physical travel. A number of interactive scenarios will be deployed and validated through user studies. Supervision is by a cross-disciplinary team of computer scientists and nurses.
Student Name: Ali A. Rostam-Alilou
Title: Development of machine learning tools to predict pathological sequelae of traumatic brain injury
Supervision Team: Nicholas Dunne, DCU / Caitríona Lally, TCD / David MacManus, DCU / David Loane, TCD
Description: Traumatic brain injuries (TBI) are one of the leading causes of death and disability worldwide. Currently, there is a huge gap in diagnostic and prognostic technologies for TBI, likely due to its multiple biological and biomechanical aspects, which can cause different neurological pathologies, impairments, and deficits in people. This multifaceted nature of TBI makes it difficult to predict the pathological outcomes using existing technologies. Indeed, it remains difficult not only to diagnose certain TBIs, e.g. concussion, but also to determine an accurate prognosis. Therefore, the aim of this research is to develop state-of-the-art computational tools to predict TBI pathologies and provide accurate diagnoses and prognoses. This will be achieved by using TBI pathology data (e.g., MRI) with state-of-the-art computer models of the brain to develop novel computational tools utilising machine learning to determine accurate diagnoses and prognoses of TBI from head impacts, e.g. sports-related head impacts and falls.
Student Name: Leona Ryan (passed viva, 2024)
Title: Using digitally-enhanced reality to reduce obesity-related stigma
Supervision Team: Jane Walsh, UoG / Owen Conlan, TCD
Description: Weight-related stigma is well established as a pervasive feature of societies and predicts higher risk of depression, anxiety, and suicidality, as well as greater risk of inflammation and chronic disease. Medical professionals consistently display high levels of anti-obesity bias, assume obesity suggests patient non-compliance, and admit they would prefer to avoid dealing with obese patients at all. A huge industry now exists around overcoming obesity and supporting weight management. However, much of the research suggests that reducing stigma will have a significantly greater impact on rates of obesity. The present study proposes to develop, deliver and evaluate an evidence-based VR intervention to foster empathy and reduce obesity-related stigma in target groups (e.g. medical students). This will be achieved by synergising current psychological research on empathy and stigma with state-of-the-art VR technologies. Intervention content will be developed using the ‘person-centred approach’ and outcomes assessed will include both psychological and behavioural indicators of success.
Student Name: Jeffrey Sardina
Title: Improving usability (for developers) of the interface between knowledge graphs and machine learning
Supervision Team: Declan O’Sullivan, TCD / John Kelleher, TU Dublin / Fergal Marrinan, SONAS Innovation
Description: Knowledge Graphs (KGs) have been successfully adopted in many domains, in both academia and enterprise settings, enabling heterogeneous data sources to be integrated to facilitate research, business analytics, fraud detection, and so on. Increasingly they are being used by Machine Learning algorithms through Knowledge Graph embeddings. However, few environments exist that aid practitioners in either discipline to easily interface the two technologies: e.g. Machine Learning experts cannot easily explore and produce Knowledge Graph embeddings, and Knowledge Graph engineers cannot easily prepare access to their knowledge graphs to suit particular machine learning algorithms. The PhD will thus identify, propose, and develop an approach that allows machine learning experts to engage with knowledge graphs, and enables knowledge graph engineers to tailor their graphs to the target machine learning algorithms being used. To provide context for the research, the health domain will be used, specifically the study of cancer, where the bringing together of Machine Learning and Knowledge Graphs is desired and already in progress. This PhD will be aligned with the sponsorship by Sonas Innovation (http://sonasi.com) of d-real PhDs, and will also benefit from research ongoing within the SFI ADAPT Research Centre at TCD.
Student Name: Martin Schmalzried
Title: Imagining and Designing the Metaverse
Supervision Team: Eugenia Siapera, UCD / Aphra Kerr, TU Dublin / Cathy Ennis, TU Dublin
Description: Interest in the metaverse has increased dramatically since Mark Zuckerberg’s talk in October 2021. By turning to the metaverse, Facebook/Meta indicated a paradigm shift from a platform and social media based internet to an immersive, integrated and experience-based environment. But what precisely the metaverse will be is still undetermined, indicating that the current period will shape its future form. It is therefore important to study the sociotechnical imaginaries around the metaverse, as they will end up feeding into relevant policy and into the design of metaverse applications. The project focuses on two key areas, games and health, and seeks to identify the sociotechnical imaginaries of metaverse applications in these areas as they are encountered among different publics, including technology developers, gamers/users, and public bodies. The project explores their views on challenges around user engagement, privacy and other ethical issues, including transparency, human dignity, individual and societal wellbeing, accountability and non-discrimination. The outcome of the research is expected to include a set of ethical and policy guidelines for the metaverse.
Student Name: Davoud Shariat Panah (graduated, 2024)
Title: Heart health monitoring using machine learning
Supervision Team: Susan McKeever, TU Dublin / Andrew Hines, UCD
Description: The SoundGen project will deliver state-of-the-art techniques for effective sound generation and mixing. This work is inspired by recent developments in neural style transfer networks for image mixing. Sounds can be represented as spectrogram images, which have proven effective as representations for sound when used with neural network classifiers. This project proposes to use spectrograms in combination with CNNs that have been trained on a variety of sounds, to discover how specific feature maps of the CNN are associated with aspects of sound, similar to image neural style transfer networks.
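The spectrogram-as-image representation described above can be sketched in a few lines of Python. The frame length, hop size and 440 Hz test tone here are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequency bins; the magnitude array
    # is the 2-D "image" that a CNN would consume
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (freq_bins, n_frames)

# One second of a 440 Hz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (129, 61): 129 frequency bins x 61 time frames
```

The resulting frequency-by-time array can be treated exactly like a greyscale image, which is what makes pretrained image CNNs (and style-transfer-style feature-map analysis) applicable to audio.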
Student Name: Samuel Veer Singh
Title: Cluster Analysis of Multi-Variate Functional Data through Non-linear Representation Learning
Supervision Team: Mimi Zhang, TCD / Shirley Coyle, DCU
Description: Wearable sensors provide a continuous and unobtrusive way to monitor an individual’s health and well-being in their daily lives. However, interpreting and analyzing wearable sensor data is challenging. One important technique for analyzing such data is cluster analysis. Cluster analysis is a type of unsupervised machine learning that involves grouping data points into clusters based on their similarities. In the context of wearable sensor data, this can involve grouping together measurements of physiological parameters such as heart rate, respiratory rate, and activity level, as well as environmental data such as temperature and humidity. This project involves working at the cutting edge of cluster analysis methods for sensor data. In contrast to traditional machine learning methods (for multivariate data), we will develop functional data clustering methods, motivated by the fact that sensor data can be naturally modelled by curves, i.e. continuous functions of time.
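A minimal sketch of the functional-data view described above: each sensor stream is treated as a curve, projected onto a small set of basis functions, and clustering is then performed on the basis coefficients rather than on the raw samples. The synthetic "sensor" curves, the Fourier basis and the 2-means loop are all illustrative assumptions, not the project's method:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)

# Hypothetical example: 20 noisy sensor curves from two latent activity patterns
curves = np.vstack(
    [np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(100) for _ in range(10)]
    + [np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal(100) for _ in range(10)])

# Step 1: least-squares projection of each curve onto a tiny Fourier basis,
# giving a low-dimensional functional representation
basis = np.vstack([np.ones_like(t), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef = curves @ np.linalg.pinv(basis)  # shape: (20, 3)

# Step 2: plain 2-means on the basis coefficients
centers = coef[[0, 10]]  # initialise from one curve of each kind
for _ in range(10):
    labels = np.argmin(((coef[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.stack([coef[labels == k].mean(0) for k in range(2)])

print(labels)  # the two latent patterns separate cleanly
```

The point of the sketch is the change of representation: once curves are summarised by a handful of basis coefficients, any multivariate clustering algorithm applies, and the distance between two recordings respects their shape as functions of time.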
Student Name: Daniel Snow
Title: Data Aware Design: A framework for understanding human-data interaction in HCI and AI
Supervision Team: Marguerite Barry, UCD / Aphra Kerr, TU Dublin / David Coyle, UCD / Catherine Mooney, UCD / Dave Lewis, TCD / Declan O’Sullivan, TCD
Description: Many everyday digital interactions are reliant on personal data being processed, often passively and without user awareness, to allow for health tracking, social media and digital content delivery, voice assistant services, etc. The dependence of these applications on data and machine learning has created an urgent need for data-aware design in computer science education, and in human computer interaction (HCI) and AI research. This PhD project will investigate how application designers understand and design for data use across services that use AI. Using qualitative methods, including interviews with practitioners, it will explore human-data interaction (HDI) and human computer interaction (HCI) theory and methods for understanding data interactions. The aim is to develop an interdisciplinary framework for data-aware design in teaching and research, to support mutual understanding and to promote human-centred AI application design. This project would suit applicants interested in qualitative research for understanding people and practices.
Student Name: Mayank Soni
Title: Adaptive Dialogue in Digital Assistants for Purposeful Conversations
Supervision Team: Vincent Wade, TCD / Benjamin R. Cowan, UCD
Description: Chatbots and Intelligent Assistants are becoming ever more ubiquitous as natural language human-machine interfaces, supporting a range of tasks, from information requests to commercial transactions. Although more challenging, there is growing interest in systems which can also interact in a social fashion, building a relationship with a user over time through natural-seeming talk, while embedding practical tasks within this matrix of conversation. The project will investigate and implement techniques and technologies which will allow systems to seamlessly transition between topics (and the underlying domains), passing control of dialogue between federated dialogue managers, each trained on different domains.
Student Name: Edward Storey
Title: My voice matters – extending high performance speech interfaces to the widest possible audience
Supervision Team: Naomi Harte, TCD / John McCrae, UoG
Description: The performance of speech interfaces continues to improve at pace, with users now able to engage with technology such as Google Duplex to automatically book a restaurant. A person’s ability to enter a world full of speech-interface driven technology depends directly on whether that technology works well for their own speech. Many users, such as those with speech impediments, the elderly, young children, and non-native speakers can become excluded. This PhD will explore ways to improve performance in speech interfaces for marginalised users. A fundamental understanding of how speech from these users is different gives us the best opportunity to guide deep-learning systems to solutions that serve a wider range of speakers. We need to discover what, and how, DNNs learn from speech, and leverage this to develop models with a greater ability to understand less-encountered speaking styles. This PhD will contribute fundamental ideas both in speech understanding, and in interpretable and adaptable AI. This PhD will be aligned with the sponsorship by Sonas Innovation (http://sonasi.com) of d-real PhDs, and will also benefit from research ongoing within the SFI ADAPT Research Centre and the Sigmedia Research Group at TCD.
Student Name: Xiaohan Sun
Title: Neuropostors: a Neural Rendering approach to Crowd Synthesis
Supervision Team: Carol O’Sullivan, TCD / Sam Redfern, UoG
Description: In computer graphics, crowd synthesis is a challenging problem due to high computational and labour costs. In this project, we propose to harness new developments in the field of Neural Rendering (NR) and apply novel machine learning methods to this problem. Building on initial results, the student will a) implement a novel hybrid image/geometry crowd animation and rendering system (Neuropostors) that uses new NR methods to facilitate limitless variety with real-time performance; and b) conduct a thorough set of quantitative and qualitative experiments, including perceptual evaluations, to drive the development of, and evaluate, the system.
Student Name: Arthit Suriyawongkul
Title: Modelling Purpose and Responsibility for Federated Governance of Data Sharing
Supervision Team: Dave Lewis, TCD / Rob Brennan, UCD / Aphra Kerr, Maynooth University
Description: Data sharing for AI training needs transparent governance and responsibilities. This research will develop semantic models for machine reasoning to help parties decide on data sharing agreements, e.g. for text, speech and video data used to train medical chatbot agents. It will model the data's personal information content; intended use; scope of processing and sharing; governance procedures; ownership rights; security protocols; and quality assurance liabilities.
Student Name: Allassan Tchangmena A Nken
Title: Multimodal Federated Learning Approach for Human Activity Recognition in Privacy Preserving Videos (FLARE)
Supervision Team: Ihsan Ullah, UoG / Susan Mckeever, TU Dublin / Michael Schukat, UoG and Peter Corcoran, UoG
Description: The United Nations has reported a rapid rise in the number of people living well beyond retirement age. Older adults wish to maintain a high-quality independent lifestyle without the need for high-cost medical/care interventions. Several technology-based solutions use machine learning, e.g. human activity recognition (HAR) systems, which focus on monitoring pure health conditions, but it is widely recognised that wellbeing is a much more subjective and complex concept. Recent state-of-the-art machine-learning algorithms are trained on large amounts of centrally stored data, which is problematic for several reasons, e.g. privacy loss, network load during data transfer, and General Data Protection Regulation restrictions. More specifically, due to privacy concerns, such solutions face acceptability barriers because they are considered too invasive. This project aims to address this acceptability problem and achieve better HAR results through the use of imaging types that preserve privacy by default (e.g. non-RGB imagery in which faces are unrecognisable) and a federated learning approach (data remains with its owner).
Student Name: Eléa Thuilier
Title: AR/VR pose tracking for task-oriented physical therapy training for the treatment of osteoporosis
Supervision Team: Attracta Brennan, UoG / John Dingliana, TCD / John Carey, Univ. of Galway / Mary Dempsey, Univ. of Galway
Description: AR/VR-based therapy is successful in rehabilitation, decreasing recovery time and cost. Meanwhile, the incorporation of gaming into AR/VR rehabilitation demonstrates greater patient motivation and adherence to therapy treatments. Despite the fact that osteoporosis is one of the greatest societal and economic health challenges today, there is a lack of research on AR/VR task-oriented physical therapy training for people with osteoporosis. In this project, the student will investigate the use of AR/VR pose tracking for task-oriented physical therapy training for the treatment of osteoporosis. The student will design and build full-body-tracking AR/VR games fusing off-the-shelf commodity hardware and IMU-based sensors. These games will target the areas most vulnerable to osteoporotic fracture (i.e. hip, spine, shoulder and arms), and will explore the effectiveness of different games and exercises, according to a therapeutic configuration, for a person with osteoporosis.
Student Name: Duyen Tran (passed viva, Feb. 2024)
Title: Next Generation Search Engines
Supervision Team: Cathal Gurrin, DCU / Owen Conlan, TCD
Description: The current approach to web search is based on a decades old model of information retrieval in which the user converts an information need into a textual query and browses a result list that is minimally personalized by ranking algorithms operating over sparse personal data. Additionally, the current models are designed as closed-loop systems with the search provider having control of the user model and monetising user profiles without any involvement of, or value for, the user. With large volumes and new forms of personal data being gathered nowadays, there is a massive opportunity in this project to look beyond the current approach to web search and develop a Next Generation Search Engine that puts the user profile at the centre of the ranking algorithms, and moreover, allows the user to control how their personal profile data is used by the search engine.
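The idea of putting the user profile at the centre of ranking, with the user in control, can be illustrated with a toy scorer that blends query relevance with profile overlap through a user-adjustable weight (setting it to 0 disables personalisation entirely). The documents, profile terms and scoring function below are all hypothetical:

```python
from collections import Counter

def rank(docs, query, profile, profile_weight=0.5):
    """Rank docs by term overlap with the query, blended with overlap
    against the user's profile terms. profile_weight is user-controlled:
    0.0 means no personal data influences the ranking at all."""
    def overlap(text, terms):
        words = Counter(text.lower().split())
        return sum(words[t] for t in terms)

    q = set(query.lower().split())
    scored = [((1 - profile_weight) * overlap(d, q)
               + profile_weight * overlap(d, set(profile)), d) for d in docs]
    return [d for _, d in sorted(scored, key=lambda s: -s[0])]

docs = ["jaguar speed on the savannah",
        "jaguar car review and speed test"]
# A motoring enthusiast's profile disambiguates the query "jaguar speed"
print(rank(docs, "jaguar speed", profile={"car", "engine", "review"}))
```

The design point is that the profile enters the score only through an explicit, user-visible parameter, rather than being fused opaquely inside a closed-loop system.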
Student Name: Ekaterina Uetova
Title: Computerized support for the Patient Generated Health Data lifecycle
Supervision Team: Dympna O’Sullivan, TU Dublin / Lucy Hederman, TCD / Damon Berry, TU Dublin
Description: Patient generated health data (PGHD) is increasing in importance as more patients self-manage their conditions. Integrated PGHD, including enhanced computational support for acquisition, analytics and actions, can provide a holistic view and facilitate shared decision making. There are numerous challenges: PGHD is large in volume and variety, originating from medical devices, sensors and apps, and the multiplicity of standards leads to semantic and syntactic incompatibilities. Sociotechnical factors, such as adherence to self-management regimes, adoption of technology and the perceived trustworthiness of PGHD, are important considerations. This research will design, develop and evaluate a digital platform for the PGHD lifecycle and focus on the following challenges:
– Understanding the requirements of patients and clinicians for self-management;
– Developing computerized support to capture and represent PGHD;
– Developing methods for analysis of PGHD that can support clinicians from a remote management perspective while also supporting patients with self-management;
– Evaluating the platform with patients and clinicians from HSE living labs.
Student Name: Sami Ul Haq
Title: Towards context-aware evaluation of Multimodal MT systems
Supervision Team: Sheila Castilho, DCU / Yvette Graham, TCD
Description: Context-aware machine translation systems have attracted growing interest in the community recently. Some work has been done to develop evaluation metrics that improve MT evaluation by considering discourse-level features, context span and appropriate evaluation methodology. However, little research has examined how context-aware metrics can be developed for multimodal MT systems.
Multimodal content refers to documents which combine text with images, video and/or audio. It ranges from almost all the web content we view as part of our online activities to much of the messaging we send and receive on systems such as WhatsApp and Messenger. This project will investigate whether other inputs, such as images, can be considered as context (along with text) in the evaluation of translation quality, and if so, how automatic metrics that account for that multimodal nature can be developed. It will implement document- and context-level techniques being developed for automatic metrics in multimodal MT, making use of the multimodal context needed in a multimodal MT scenario.
Student Name: Arjun Vinayak Chikkankod
Title: Modeling cognitive load with EEG and deep learning for human computer interaction and instructional design
Supervision Team: Sarah Jane Delany, TU Dublin / Ann Devitt, TCD
Description: This project will focus on multi-disciplinary research in the area of Cognitive Load (CL) modeling. It aims to construct an interpretable/explainable model of CL for real-time prediction of task performance. This will allow human-centred designers in HCI and Education to develop, personalize and rapidly test their interfaces, instructional materials and procedures so that they are aligned with the limits of human mental capacity, maximising human performance. The novelty lies in the use of Deep Learning methods to automatically learn complex non-linear representations from EEG, moving beyond the knowledge-driven approaches that have produced hand-crafted deductive knowledge. A challenging task is to translate these representations into human-interpretable forms, a well-known issue in Explainable Artificial Intelligence. To tackle this, modern methods for automatic rule extraction from deep-learning models will be employed, together with symbolic, argumentative reasoning methods, to bring these rules together in a highly accessible, explainable/interpretable model of CL.
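For readers unfamiliar with EEG-based CL estimation, the hand-crafted baseline the project moves beyond looks roughly like this: extract spectral band power from an EEG window and use a simple feature such as the theta/alpha ratio, a commonly cited cognitive-load proxy. This stdlib-only sketch is a toy illustration; the project's deep models would learn such representations automatically.

```python
# Toy baseline: naive DFT band power from an EEG window, then the
# theta/alpha ratio as a hand-crafted cognitive-load feature.
import cmath, math

def band_power(signal, fs, lo, hi):
    """Sum of naive DFT power over bins with lo <= f < hi (Hz)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f < hi:
            x = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(signal))
            power += abs(x) ** 2 / n
    return power

def cl_feature(window, fs=128):
    theta = band_power(window, fs, 4, 8)    # tends to rise with load
    alpha = band_power(window, fs, 8, 13)   # tends to drop with load
    return theta / (alpha + 1e-9)

# synthetic 1-second window dominated by a 6 Hz (theta-band) component
fs = 128
high_load = [math.sin(2 * math.pi * 6 * t / fs) for t in range(fs)]
```

A deep-learning pipeline replaces both the hand-picked bands and the ratio with learned features; the explainability work then tries to recover rules of comparable readability from those learned features.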
Student Name: Yike Wang
Title: Machine Learning for Financial Asset Pricing
Supervision Team: Thomas Conlon, UCD / Pierpaolo Dondio, TU Dublin / John Cotter, UCD
Description: Asset pricing is concerned with understanding the drivers of asset prices, helping investors to better understand the risks underpinning asset allocation. This research will employ machine learning (ML) techniques to uncover new links between economic fundamentals and asset prices, allowing the identification of mis-priced securities. ML-based techniques, such as dimensionality reduction, deep learning, regression trees and cluster analysis, have helped uncover complex non-linear associations across multiple fields but remain relatively unexplored in the field of financial asset pricing. In this research, improved asset pricing precision will result from discerning between long-run fundamentals and short-run fluctuations. Economic intuition will be developed through the use of interpretable ML. The research has direct FinTech-related applications, including asset management, trading strategies and risk management.
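Two of the ingredients named above can be sketched in a few lines: separating a long-run component from short-run noise (here, a crude trailing average), and an interpretable regression tree reduced to a single split ("stump") linking the fundamental to returns. The data and split logic are illustrative only, not the project's methodology.

```python
# Toy sketch: long-run fundamental extraction + a one-split
# regression tree, the simplest interpretable ML model of this family.

def long_run(series, window=4):
    """Trailing moving average as a crude long-run fundamental."""
    return [sum(series[max(0, i - window + 1): i + 1]) /
            min(window, i + 1) for i in range(len(series))]

def fit_stump(x, y):
    """Single-split regression tree: threshold minimising squared error."""
    best = None
    for thr in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= thr]
        right = [yi for xi, yi in zip(x, y) if xi > thr]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda xi: ml if xi <= thr else mr

# illustrative data: a fundamental signal and associated returns
fundamentals = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
returns = [0.01, 0.01, 0.01, 0.05, 0.05, 0.05]
model = fit_stump(fundamentals, returns)
```

The appeal of tree-based models in this setting is exactly what the stump shows: the fitted rule ("if the fundamental exceeds the threshold, expect the higher return regime") can be read off directly, which supports the economic-intuition goal.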
Student Name: Liang Xu (submitted PhD, 2024)
Title: Game-Based Approaches and VR in CALL for Less Commonly Taught Languages
Supervision Team: Monica Ward, DCU / Elaine Uí Dhonnchadha, TCD
Description: Language learning is a complex task that involves a range of cognitive processes. Digital technologies are widely employed for commonly taught languages, but are less frequently used in the Less Commonly Taught Language (LCTL) context. Intelligent Computer-Assisted Language Learning (CALL) systems can enhance the effectiveness and efficiency of both teacher-led instruction and student learning. This research blends language learning pedagogy, sociocultural theory, and Digital Game-Based Language Learning (DGBLL) in the form of games or VR, with CALL materials enhanced using Natural Language Processing techniques. In this study, we look into methods for encouraging the learning and teaching of LCTLs, particularly indigenous and endangered languages. We present a DGBLL system designed to promote language learning and student engagement. The system has been used successfully in primary school classrooms, with positive feedback from both students and teachers.
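As a small illustration of NLP-enhanced CALL material, the sketch below generates a cloze (fill-in-the-blank) exercise from a sentence by blanking a target vocabulary item. The tokenisation is deliberately naive and the example sentence (Irish) is only illustrative; the project would use proper NLP tooling for each target language.

```python
# Illustrative cloze-exercise generator for CALL materials.
# Naive whitespace tokenisation; real LCTL pipelines need
# language-specific tokenisers and morphology handling.
import random

def make_cloze(sentence: str, targets: set, seed: int = 0):
    """Blank one target word; return (exercise_text, answer)."""
    rng = random.Random(seed)
    words = sentence.split()
    candidates = [i for i, w in enumerate(words)
                  if w.strip(".,!?").lower() in targets]
    if not candidates:
        return sentence, None
    i = rng.choice(candidates)
    answer = words[i].strip(".,!?")
    words[i] = words[i].replace(answer, "____")
    return " ".join(words), answer

# "Tá an cat ar an mbord." - "The cat is on the table." (Irish)
exercise, answer = make_cloze("Tá an cat ar an mbord.", {"cat", "mbord"})
```

In a game setting, the generated blank becomes the interaction point: the learner supplies the word, and the system checks it against `answer`.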
Student Name: Yinghan Xu
Title: Interactive Volumetric Video for Extended Reality (XR) Applications
Supervision Team: John Dingliana, TCD / Steven Davy, TU Dublin / Gareth W. Young, TCD
Description: In this project we investigate 3D graphics, vision and AI techniques to improve the use of volumetric video for interactive Extended Reality (XR) technologies. An advantage of volumetric video is that it facilitates personalised and photorealistic animations of subjects without the need for editing by experienced animators. However, most current applications merely treat volumetric video as a linear sequence of frames with limited possibility for interaction, apart from rudimentary operations such as playback or rigid transformations. We will investigate extensions to volumetric video including: (a) flexible reuse such as retargeting, time-warping or seamlessly transitioning between independently recorded clips, whilst preserving the personalised and realistic appearance of the subject; (b) improved seamless integration in XR, avoiding unrealistic intersections with the real environment, and matching physical events in the volumetric video with viable interaction points in the real-world environment; (c) real-time adaptation of volumetric video to integrate and improve shared XR experiences.
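One of the operations listed above, time-warping, can be sketched at its simplest: given volumetric frames captured at fixed times, synthesise a frame at an arbitrary time by linearly interpolating corresponding points. This assumes point-aligned frames, which real volumetric data does not provide; estimating those correspondences is part of the research challenge.

```python
# Hedged sketch of volumetric time-warping via linear interpolation.
# Frames are assumed point-aligned (same point count and ordering),
# an idealisation of real captured volumetric video.

def warp(frames, t):
    """frames: list of point lists [(x, y, z), ...]; t in frame units."""
    i = min(int(t), len(frames) - 2)   # index of the earlier keyframe
    a = t - i                          # blend weight in [0, 1]
    return [tuple((1 - a) * p + a * q for p, q in zip(p0, p1))
            for p0, p1 in zip(frames[i], frames[i + 1])]

f0 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
f1 = [(0.0, 2.0, 0.0), (1.0, 2.0, 0.0)]
mid = warp([f0, f1], 0.5)   # synthesised frame halfway between captures
```

Retiming a clip then amounts to evaluating `warp` on a remapped time axis; the harder problems, non-rigid correspondence and preserving photorealistic appearance, are exactly what the project targets.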