d-real is funded by Science Foundation Ireland and by the contributions of industry partners.
Applications for d-real positions starting in September/October 2021 are being accepted now. The deadline for applications is 16.00 on Thursday 18th March 2021. If there are open positions after this date, we will operate a rolling call. If you have any questions about the programme, please email the Programme Manager, Dr Stephen Carroll (email@example.com). To apply, please visit our Apply to d-real page.
Dublin City University
Title: Establishing Design Principles to Combat Over-reliance on Day-to-Day Technologies and Cognitive Decline
Supervision Team: Hyowon Lee, DCU (Primary Supervisor) / Maria Chiara Leva, TU Dublin (External Secondary Supervisor)
Description: The concept of “usability” in the field of Human-Computer Interaction strives to make our technologies a better fit for our tasks through improved efficiency, ease of use and satisfaction. Most of the web services and apps we use in our everyday lives try to reduce our mental effort in recall (e.g. reminder apps), vigilance (e.g. notifications), arithmetic (e.g. calculator apps), spatial cognition (e.g. step-by-step GPS instructions), etc. While immediacy and convenience are among the main reasons we use these tools in the first place, growing anecdotal and scientific evidence suggests that extended reliance on them can have negative consequences for our cognition. Through a series of iterations of brainstorming, interaction sketching/design and usability testing, this project will construct a new set of usability principles and guidelines for designing user interfaces that minimise the potentially negative impact of over-reliance on day-to-day technologies.
Title: Computational argumentation theory for the support of deliberation processes in online deliberation platforms
Supervision Team: Jane Suiter, DCU (Primary Supervisor) / Luca Longo, TU Dublin (External Secondary Supervisor)
Description: This project focuses on argumentation, an emerging sub-topic of artificial intelligence aimed at formalising reasoning under uncertainty with conflicting pieces of knowledge. The project will deploy argumentation theory, a paradigm for implementing defeasible reasoning, with a key focus on conflict resolution. In recent years deliberation has emerged as a key component of democratic innovation, enabling decision making and reason giving, and helping the public make more enlightened judgements and act accordingly. This is an interdisciplinary project which will use this computational approach to examine real-world deliberative public engagement activities, including key citizens’ assemblies. Specifically, the aim of this research is to deploy formal argumentation in the modelling and analysis of deliberative discourse at deliberative events, for example discussions on climate change or immigration. From a collection of conflicting human viewpoints, this will enable the automatic extraction of the most deliberative views on a topic. These can then be justified and explained to the public, in turn supporting understanding and decision-making.
Title: Maintaining Flow in a Virtual Reality / Augmented Reality Environment
Supervision Team: Jennifer McManis, DCU (Primary Supervisor) / Attracta Brennan, NUIG (External Secondary Supervisor)
Description: Virtual and Augmented Reality (VR/AR) have redefined the interface between the digital and physical worlds, enabling innovative applications in entertainment, e-commerce, education and training. Key to the attraction of VR/AR applications is their ability to provide an immersive experience, characterised by the concept of flow: the idea that the user can become “lost in the moment”. However, computer network constraints can interfere with data delivery and reduce the user’s Quality of Experience (QoE), disrupting their sense of flow. This project will focus on personalising VR/AR content delivery to maintain user QoE at a high level. A VR/AR Content Personalisation Algorithm will adapt the content delivered according to user preferences and operational information about current network and device conditions. Key to this project’s success will be User Profiling and VR/AR QoE modelling, as well as a methodology to assess the impact of VR/AR on a user’s flow.
Title: Smart garments for immersive home rehabilitation using AR/VR
Supervision Team: Shirley Coyle, DCU (Primary Supervisor) / Carol O’Sullivan, TCD (External Secondary Supervisor)
Description: Smart garments provide a natural way of sensing the physiological signals and body movements of the wearer. Such technology can provide a comfortable, user-friendly interface for digital interaction. This project aims to use smart garments for gesture and movement detection to enhance user interaction within augmented and virtual reality systems. Applications for this research include home-based rehabilitation systems for recovery from stroke, traumatic brain injury or spinal cord injuries. The goal will be to improve adherence by making exercise programmes more engaging for the user while at the same time gathering valuable information for therapists regarding the user’s performance and recovery over time. The novel combination of smart garments with AR/VR environments will promote a greater level of immersion in the virtual world, beyond conventional controls, and create a novel, user-focused approach to human-machine interfacing.
Title: Enabling Scalable and Cost-Effective AR/VR User Experiences
Supervision Team: Paul Clarke, DCU (Primary Supervisor) / Anthony Ventresque, UCD (External Secondary Supervisor)
Description: Augmented Reality (AR) and Virtual Reality (VR) are key emerging technologies that hold great potential to transform content engagement. Central to the full realisation of this potential is the innovation of delivery mechanisms that support personalised experiences and content delivery in real time. This adaptive user experience is led by user engagement, but variable content may be delivered from central sources. It is this paradigm that is of central interest to this research project, and it can be achieved through a user-centric experience underpinned by a microservices system design. Microservices allow software systems to evolve in a highly flexible manner; they also permit streaming-triggering opportunities that can interrupt ongoing streaming content with feedback loops managing handovers between different possible streaming pathways. This is a challenging task, but one that is key to unlocking adaptable and personalised AR/VR engagements.
National University of Ireland, Galway
Title: Driver behaviour: Merging psychology and technology to develop an early warning system for driver stress and fatigue
Supervision Team: Jane Walsh, NUIG (Primary Supervisor) / Alan Smeaton, DCU (External Secondary Supervisor) / Peter Corcoran, NUIG (Additional Supervisory Team Member)
Description: The proposed research will explore the physiological and psychological impact of driver stress and fatigue on performance, to inform the development of safety interventions using a Driver Monitoring System (DMS). Several research studies have attempted to monitor psychological factors that impact driving performance (e.g. fatigue, distraction); many of these have relied on physiological assessments using devices such as sensors, cameras, EEG, heart-rate monitors, galvanic skin response, eye-tracking and verbal cues (e.g. Morales et al., 2017; Halim & Rehan, 2020). This research will develop, validate and improve current DMS research using specially developed scenarios to invoke the target psychophysiological response. The research will validate sensors (e.g. NIR, visible, thermal and audio) synchronised with subject biometrics (e.g. heart rate, galvanic skin response, EEG signal data) in realistic driving situations. These tests will be validated against psychological (e.g. short-term memorability (Azcona et al., 2020), self-reported anxiety, anger, fatigue), physiological (EEG, GSR, HR), behavioural (e.g. driving performance, memory/recall) and observational measures (e.g. stress indicators). Building on these protocols, the research will also develop and test potential interventions with the aim of improving driver safety.
Title: Analysing User Generated Multimodal Content and User Engagement in an Online Social Media Domain
Supervision Team: Josephine Griffith, NUIG (Primary Supervisor) / Susan McKeever, TU Dublin (External Secondary Supervisor)
Description: In the context of online social media, much of the research work carried out to date uses the text from user posts and the social network structure. However, the trend on many social media platforms is a move from text to emojis, images and videos, many of which are “memes” containing images superimposed with text. In this project we wish to analyse multimodal social media data in an entertainment domain. The aims of the project are: 1) to analyse trends across different modalities of user-generated content, with respect to features such as social media engagement, topics, higher-level concepts in the content, and user emotions; and 2) to find how these features correlate with viewing figures. The analysis will be carried out using machine learning and deep learning techniques, in tandem with language models for text representation and interpretation, and topic modelling techniques.
Title: Virtual Reality for Robust Deep Learning in the Real World
Supervision Team: Michael Madden, NUIG (Primary Supervisor) / Cathy Ennis, TU Dublin (External Secondary Supervisor)
Description: There have been notable successes in Deep Learning, but the requirement for large, annotated datasets creates bottlenecks. Datasets must be carefully compiled and annotated with ground-truth labels. One emerging solution is to use 3D modelling and game engines such as Blender or Unreal to create realistic virtual environments. Virtual cameras placed in such environments can generate images or movies, and since the locations of all objects in the environment are known, we can computationally generate fully accurate annotations. Drawing on the separate and complementary fields of experience of the two supervisors, the PhD student will gain a synergy of expertise in Graphics, Perception and Deep Learning. This PhD research will investigate questions including: (1) strategies to combine real-world and virtual images; (2) the importance of realism in virtual images; (3) how virtual images covering edge cases and rare events can increase the reliability, robustness and trustworthiness of deep learning.
Title: AR/VR pose tracking for task-oriented physical therapy training for the treatment of osteoporosis
Supervision Team: Attracta Brennan, NUIG (Primary Supervisor) / John Dingliana, TCD (External Secondary Supervisor) / John Carey, NUIG and Mary Dempsey, NUIG (Additional Supervisory Team Members)
Description: AR/VR-based therapy has proven successful in rehabilitation, decreasing recovery time and cost, while the incorporation of gaming into AR/VR rehabilitation has been shown to increase patient motivation and adherence to therapy. Despite the fact that osteoporosis is one of the greatest societal and economic health challenges today, there is a lack of research on AR/VR task-oriented physical therapy training for people with osteoporosis. In this project, the student will investigate the use of AR/VR pose tracking for task-oriented physical therapy training for the treatment of osteoporosis. The student will design and build full-body-tracking AR/VR games fusing off-the-shelf commodity hardware and IMU-based sensors. These games will target the areas most vulnerable to osteoporotic fracture (i.e. hip, spine, shoulder and arms) and will explore the effectiveness of different games and exercises, according to a therapeutic configuration, for a person with osteoporosis.
Trinity College Dublin
Title: Interrogating knowledge graphs from a non-computer scientist’s perspective
Supervision Team: Declan O’Sullivan, TCD (Primary Supervisor) / Marguerite Barry, UCD (External Secondary Supervisor) / Fergal Marrinan, SONAS Innovation (Additional Supervisory Team Member)
Description: Knowledge Graphs (KGs) have been successfully adopted in many domains, in both academic and enterprise settings, enabling heterogeneous data sources to be integrated to facilitate research, business analytics, fraud detection, and so on. The creation and consumption of these KGs often rely on computer science practitioners. The uptake of KG technologies is hampered in enterprises and academic/cultural institutions alike by the lack of domain- or task-specific tooling for subject matter experts. The PhD will thus identify, propose and develop domain-specific requirements for creating, engaging with and interrogating knowledge graphs from a non-computer scientist’s perspective. The PhD student will collaborate closely with researchers in the FAIRVASC project (www.fairvasc.eu) and the Beyond 2022 project (www.beyond2022.ie), both of which have already developed KGs to support their research, but through the mediation of computer scientists. In parallel, it is planned, through SONAS Innovation, to apply the proposed solution to a use case identified in an enterprise domain. This PhD will be aligned with the sponsorship of d-real PhDs by SONAS Innovation (http://sonasi.com), and will also benefit from ongoing research within the SFI ADAPT Research Centre at TCD.
Title: Simulation and Perception of Physically-based Collisions in Mixed Reality Applications
Supervision Team: Carol O’Sullivan, TCD (Primary Supervisor) / Brendan Rooney, UCD (External Secondary Supervisor)
Description: In this project, we will research new methods for simulating collisions and contacts between objects that move according to the laws of physics, when one of those objects is real and the other virtual (e.g., a virtual ball bouncing on a real table, or vice versa). We will also explore the factors that affect the perception of physically based interactions between real and virtual objects (both rigid and deformable). Multisensory (i.e., vision, sound, touch) simulation models of physically plausible interactions will be developed, using captured data and machine learning (ML), and driven by our new perceptual metrics. Our real-time simulations will be applied in Mixed Reality (MR) environments, which will be displayed using a variety of technologies, including the MS Hololens, projection-based MR and hand-held devices.
Title: Procedural Generation of Narrative Puzzles
Supervision Team: Mads Haahr, TCD (Primary Supervisor) / Marguerite Barry, UCD (External Secondary Supervisor)
Description: Narrative puzzles are puzzles that form part of the progression of a narrative, and whose solutions involve exploration and logical as well as creative thinking. They are key components of adventure and story-driven games, and often feature in large open-world games. However, filling large open worlds with engaging content is challenging, especially for games with procedurally generated worlds, such as Minecraft (2011) and No Man’s Sky (2016). Systems exist for generating narrative puzzles procedurally, but they lack context about many narrative elements, such as character motivation, plot progression and dramatic arc, as well as player modelling. This project will improve the procedural generation of narratives for small-scale narrative games as well as large-scale open-world games by integrating new types of narrative elements and player modelling into the Story Puzzle Heuristics for Interactive Narrative eXperiences (SPHINX) framework, potentially resulting in dynamically generated narratives of increased sophistication and significantly improved player experience.
Title: Appearance Transfer for Real Objects in Mixed Reality
Supervision Team: John Dingliana, TCD (Primary Supervisor) / Cathy Ennis, TU Dublin (External Secondary Supervisor)
Description: Research in mixed reality is largely concerned with rendering virtual objects so that they appear plausibly integrated within a real environment. This project investigates the complementary problem of modifying the appearance of the real environment as viewed through a mixed reality display. For instance, a physical wall might be virtually removed so that objects can be seen through or embedded within it. However, merely removing existing surfaces may appear implausible; instead, the simulated geometry of a hole could be inserted to create the appearance of sections being cut away, or the object could be re-rendered as refractive glass so that we see through the surface but retain an understanding of the original geometry. The problem is particularly challenging in the context of modern Optical See-Through (OST) MR displays, such as Microsoft’s Hololens, where the real environment is seen directly through a transmissive screen, limiting the degree to which we can change its appearance.
Title: Making Deep Learning Useful for Movie Post-Production
Supervision Team: François Pitié, TCD (Primary Supervisor) / Peter Corcoran, NUIG (External Secondary Supervisor)
Description: The machine learning revolution has had a profound impact on the field of computer vision; surprisingly, however, the impact of Deep Learning on the video processing pipeline of the movie and video production industry has been limited. The objective of this project is to make Deep Learning useful for visual media production by placing user feedback at the core of the neural network architecture design, so as to help the artist reach 100% accuracy.
Title: Mobile apps for advanced language learning (speaking/listening)
Supervision Team: Elaine Uí Dhonnchadha, TCD (Primary Supervisor) / Andrew Hines, UCD (External Secondary Supervisor)
Description: Speaking and listening in a new language are critical when moving country, especially for those moving for a job or to study at a university where programmes are not delivered in their native language. Opportunities to practise language skills with content, accents and dialects relevant to proficiency in an academic subject with specialised/technical vocabulary are usually limited before arriving in the destination country. What if there were a convenient, personalised mobile application tailored to an individual, providing targeted learning support based on their language and vocabulary preferences? This project will develop methods to generate user-specific listening materials and computer-based systems that can measure and provide feedback on mispronunciation. The project will give the student an opportunity to learn about linguistics, speech signal processing, Natural Language Processing (NLP) and state-of-the-art machine learning. Recordings, as well as NLP and text-to-speech synthesis with state-of-the-art voice conversion (VC), will be used to generate content relevant to the language learner’s area of study, in order to generate listening/reading exercises. Deep neural networks using transfer learning will be applied to mispronunciation detection. Computer-aided pronunciation teaching (CAPT) will extend AI techniques in the domains of automatic speech recognition (ASR) and speech quality assessment to evaluate the learner’s speech and provide personalised feedback.
Title: Making Chatbots more personalised: Personalisation for AI driven Conversational Digital Assistants
Supervision Team: Vincent Wade, TCD (Primary Supervisor) / Robert Ross, TU Dublin (External Secondary Supervisor)
Description: The explosion in the use of conversational digital assistants* brings a unique opportunity to empower users in their work and leisure lives. Current state-of-the-art deep learning approaches to conversational digital assistants typically focus on single-utterance-at-a-time dialogue management and response selection. However, next-generation digital assistants seek to embed deeper personalisation in order to enhance effectiveness, efficiency and user satisfaction in the dialogue interaction. This PhD research will focus on personalisation in conversational digital assistants to improve the effectiveness, efficiency and/or satisfaction of the user. It will research new technologies for personalisation in conversational interaction (e.g. personalising the actual dialogue sequencing and dialogue responses) and new techniques for adapting or embedding user models into such neural dialogue systems. *Also known as virtual assistants or dialogue systems. The application of AI in such systems is generically referred to as Conversational AI.
Title: Beyond CBT: innovative exploration of digital support for psychological therapies
Supervision Team: Gavin Doherty, TCD (Primary Supervisor) / David Coyle, UCD (External Secondary Supervisor) / Corina Sas, Lancaster University (Additional Supervisory Team Member)
Description: There has been a huge rise in research on, and deployment of, digital therapies in the area of mental health and mental wellbeing. Within existing research there has been an understandable focus on digital support for Cognitive Behavioural Therapy (CBT), due to its strong evidence base and the fact that it is widely used and relatively structured, and hence amenable to digital support and delivery. However, CBT will not be suitable or effective for everyone, and there are many other evidence-based therapies available. Alternatives inspired by third-wave therapeutic approaches, such as compassion-focused, emotion-focused or mindfulness-based therapy, go beyond the cognitive and behavioural aspects addressed in CBT to support key emotional and bodily symptoms linked to emotional wellbeing and mental health. However, there has been limited work exploring how these newer types of interventions might be delivered or supported with digital technologies. This PhD project will focus on exploring alternative evidence-based therapies and how they could be digitally supported.
Title: Multi-Lingual Lip-Synch for photorealistic virtual humans with emotion
Supervision Team: Rachel McDonnell, TCD (Primary Supervisor) / Peter Corcoran, NUIG (External Secondary Supervisor)
Description: This project will improve the naturalness of speech synthesis for photorealistic digital humans by predicting visual speech features, including emotion, from a linguistic input, using a combination of advanced computer graphics and deep learning methods. A large database of training data will be created using photorealistic virtual humans and used to train generative adversarial networks (GANs). The focus will be on high-quality multi-lingual lip animations with emotion that will lead to better user experiences in a wide range of applications, such as computer games, movie subtitles and intelligent assistants. The researcher on this project will have a unique opportunity to collaborate with engineers from Xperi, the supporting industry partner.
Technological University Dublin
Title: A novel framework for neuroscience and EEG event-related potentials with virtual reality and deep autoencoders
Supervision Team: Luca Longo, TU Dublin (Primary Supervisor) / Rachel McDonnell, TCD (External Secondary Supervisor)
Description: The use of electroencephalography (EEG) within Human-Computer Interaction and for brain-computer interface (BCI) development is increasing. Within neuroscience, there is an increasing emphasis on the analysis of event-related potentials (ERPs) for studying brain function in ecological contexts rather than exclusively in lab-constrained environments. This is because humans might not behave naturally in controlled settings, which influences the reliability of findings. However, EEG studies performed in natural/ecological settings are more problematic than those in controlled settings, because researchers have less control over the EEG equipment. For these reasons, a new trend is the application of Virtual Reality (VR) to ERP research, developing ecological virtual environments similar to real ones. The advantage is that a traditional ERP study can still be performed in supervised settings, while giving the researcher full control over experimental factors and EEG equipment. This PhD will produce a novel framework that will allow scholars to perform ERP-based research in ecological settings by employing VR and by constructing autoencoders, fully unsupervised deep-learning methods, for automatic EEG artefact reduction, taking advantage not only of the temporal dynamics of EEG but also of its spatial and frequency-domain characteristics.
Title: Inclusive Maths: Designing Intelligent and Adaptable Educational Games to Reduce Maths Anxiety in Primary Schools
Supervision Team: Pierpaolo Dondio, TU Dublin (Primary Supervisor) / Attracta Brennan, NUI Galway (External Secondary Supervisor)
Description: Maths Anxiety is a condition affecting one in six students worldwide. Although digital games have been widely used to support children’s mathematical skills, results regarding their effect on Maths Anxiety are inconclusive. Potential explanations are the scarcity of maths-related games able to adapt to the learner, and the lack of games explicitly designed to deal with Maths Anxiety. Inclusive Maths seeks to investigate whether the introduction of adaptive and anxiety-aware features in digital games for primary school can improve students’ performance and reduce their maths anxiety. Our hypothesis is that, by adding adaptation to maths games, anxious students will feel more confident playing the game, and that, by introducing anxiety-aware features such as an emphasis on the storytelling elements of the game, individual reward systems, and interactive and collaborative game modes, players will feel more engaged. The project will evaluate the games developed over three cycles of experimentation in 30 participating schools.
Title: Exchanging personal health data with electronic health records: A standardized information model for patient generated health data
Supervision Team: Dympna O’Sullivan, TU Dublin (Primary Supervisor) / Lucy Hederman, TCD (External Secondary Supervisor)
Description: As healthcare technologies evolve, the management of health data is no longer only clinician-governed but also patient-controlled. Engagement with consumer health IT has been augmented by Internet of Things-based sensing and mobile health apps. Until recently, Electronic Health Records were seen as the main vehicle for driving healthcare systems forward; however, a vital role is increasingly played by patients in controlling their own health information and self-managing their diseases. The objective of this research is to develop a novel middleware information model to facilitate better interoperability and exchange of patient-generated health data between patients and providers. Key research challenges include the development of a conceptual architecture for an interoperable solution; semantic representation to enable data to be mapped to standardised biomedical vocabularies such as SNOMED CT; syntactic representation to conform to healthcare standards such as HL7 FHIR; and privacy and security requirements for transferring and storing personal health data.
Title: Controllable Consistent Timbre Synthesis
Supervision Team: Seán O’Leary, TU Dublin (Primary Supervisor) / Naomi Harte, TCD (External Secondary Supervisor)
Description: The goal of this research is to provide control over the design of consistent musical instruments. Until recently, sound synthesis has been dominated by two approaches: physical modelling and signal modelling. Physical models specify a source; once the source is specified, the family of sounds coming from that source can be synthesised. Signal models, on the other hand, specify waveforms and so are very general. The major downside of signal models is that many parameters are required to specify a single sound. The goal of this project is to use machine learning algorithms to synthesise the parameters for a family of sounds related to a single source. The project will marry machine learning and signal processing techniques, including research into the use of generative algorithms, signal models and sound representations.
University College Dublin
Title: Customising AI for digital curation work that utilises controlled vocabularies
Supervision Team: Amber Cushing, UCD (Primary Supervisor) / Suzanne Little, DCU (External Secondary Supervisor)
Description: Digital curators are the “frontline” practitioners who appraise, select, ingest, apply preservation actions to, and maintain digital heritage objects, and then provide access to them for all types of users, from digital humanities scholars to tourists. This work has the potential to benefit from AI technology, particularly computer vision. However, ethical issues surround the use of the controlled-vocabulary classification systems that digital curators use to arrange and describe digitised historical photograph collections in heritage institutions. If these ethical concerns are not addressed, uptake of AI technology in this sector may be slow or limited. The project will explore the ethical and social context of digital curation work to inform the customisation of an AI model for use in the sector. The project will use Microsoft Azure Cognitive Services to customise and refine a computer vision (CV) model for use with the Library of Congress subject heading classification system. This position is based at the UCD School of Information & Communication Studies.
Title: Nudging Humans To Have Ethically-Aligned Conversations
Supervision Team: Vivek Nallur, UCD (Primary Supervisor) / Vincent Wade, TCD (External Secondary Supervisor)
Description: Conversational AI (or chatbots) exists in obvious conversation-enabled devices such as Alexa, Google Assistant or Siri, but also in devices such as smartphones, smartwatches, FitBits, etc. Given that we spend almost all our waking hours in close or constant contact with a smart device, it is trivial for the device to nudge our attention towards news, views or decision options that it considers important. Nudges are behavioural interventions that arise primarily from human decision-making frailties (e.g. loss aversion, inertia, conformity) and opportunity seeking. However, not all humans are affected by the same biases; a nudge that works on one person may not work on another. This project investigates the possibility of a conversational agent nudging a human to use ethically grounded language in conversations and texts. The conversational agent will attempt to deliver nudges in an adaptive manner, with the objective of making the human more ethically aware.
Title: Multimodal data wrangling for Real-time IoT-based Urban Emergency Response Systems
Supervision Team: Andrew Hines, UCD (Primary Supervisor) / Rob Brennan, DCU (External Secondary Supervisor) / Fatemeh Golpayegani, UCD (Additional Supervisory Team Member)
Description: Emergency Response Systems (ERS) enable the rapid location of emergencies and the deployment of resources by emergency response teams. Historically this has relied on an emergency call from a person at the scene. Technology advancements in urban areas and so-called smart cities mean that Internet of Things-enabled infrastructure can offer a “single strike” data dump of multimodal information to the ERS. For example, in a vehicle collision, information regarding the crash severity, number of passengers, fuel type, etc. can be gathered from in-place cross-platform sensors, including vehicle and smartphone audio and accelerometer sensors, traffic cameras, and more. This information may be valuable to fire crews, ER staff and other members of the response team. The technical challenges addressed by this project will focus on audio and video processing, data collection and curation, and the application of data-driven learning (e.g. deep learning and knowledge graphs) to cross-platform knowledge models. The student will identify and prioritise data sources, build a framework to integrate and generalise multimodal data, and demonstrate how multiple platforms can assist real-time ERS decision-making.
Title: Annotated lip reading for Augmented Educational Systems
Supervision Team: Eleni Mangina, UCD (Primary Supervisor) / Sam Redfern, NUIG (External Secondary Supervisor)
Description: This project begins with the hypothesis that emerging technologies (Augmented and Virtual Reality – AR/VR) will influence pedagogical perspectives and practices for students with literacy problems. A review of the literature has shown that lip reading remains a challenging topic due to the lack of an annotated dataset for this domain. For example, automatic speech recognition and speech analysis need a comprehensive dataset, which is expensive as well as time-consuming to gather. This project considers embedding a 3D avatar within an augmented educational system, with the capacity for semi-supervised learning, to generate and manipulate fabricated data from the real input data. The aim is to identify how AR, or VR as an alternative, can assist with the collection of data through an immersive environment.
Title: Data Aware Design: A Human-Data Interaction Framework to Promote User Agency
Supervision Team: Marguerite Barry, UCD (Primary Supervisor) / Declan O’Sullivan, TCD (External Secondary Supervisor)
Description: Many everyday digital interactions involve uses of personal data that are passively processed for digital content, health tracking, social media, voice assistants, etc. The growing dependence of digital applications on machine learning and data has created an urgent need to support more data-aware design in human-computer interaction (HCI). This PhD project will investigate how application designers understand and design for passive data use across different services. It will explore human-data interaction (HDI) and HCI methods for transforming, visualising and mapping data interactions so that people can understand, track and potentially negotiate the use of their data. The aim is to develop participatory design processes and a framework for ‘data aware’ design.
*Please note that funding for some projects is contingent on supervisors being approved by Science Foundation Ireland