TCD | DCU | TUD | UCD | NUIG

D-REAL is funded by Science Foundation Ireland and by the contributions of industry partners.


Trinity College Dublin


Code: 2019TCD1
Title: Making Deep Learning Useful for Movie Post-Production
Supervision Team: Francois Pitié, TCD / Peter Corcoran, NUIG
Description: The machine learning revolution has had a profound impact on the field of computer vision; surprisingly, however, the impact of Deep Learning on the video processing pipelines of the movie and video production industry has been limited. The objective of this project is to make Deep Learning useful for visual media production by placing user feedback at the core of the neural network architecture design, so as to help the artist get to 100% accuracy.


Code: 2019TCD2
Title: Adaptive Dialogue in Digital Assistants for Purposeful Conversations
Supervision Team: Vincent Wade, TCD / Benjamin Cowan, UCD
Description: Chatbots and Intelligent Assistants are becoming ever more ubiquitous as natural language human-machine interfaces and are supporting a range of tasks, from information requests to commercial transactions. Although more challenging, there is growing interest in systems which can also interact in a social fashion, building a relationship with a user over time through natural-seeming talk, while embedding practical tasks within this matrix of conversation. The project will investigate and implement techniques and technologies which will allow systems to seamlessly transition between topics (and the underlying domains), passing control of dialogue between federated dialogue managers, each trained on different domains.


Code: 2019TCD3
Title: Enhancing Clinicians' ability to pull together trusted data at the right time
Funding: This PhD is sponsored by Sonas Innovation (http://www.sonasi.com/) and supported by the Irish Health Service Executive (HSE) Digital Academy
Supervision Team: Declan O’Sullivan, TCD / Marguerite Barry, UCD
Description: Clinicians and clinician scientists increasingly face the challenge of integrating diverse data sources related to patients. These include sensitive personal clinical data, patient-generated data (e.g. via an app), third-party curated data (e.g. registry, biomarker data) and third-party services data (e.g. geolocation, environment data, etc.). Coupled with this, the quality of the data being integrated needs to be verified, and the source needs to be verified as trustworthy on an ongoing basis. This PhD would explore how to engage the clinician/scientist in the data integration process, given the constant evolution of the data and data sources. It will advance the state of the art in the areas of data integration, personalisation and user engagement. This PhD will uniquely leverage the expertise of staff from the sponsor, Sonas Innovation, and from the Irish Health Service Executive (HSE) Digital Academy.


Code: Vacancy Filled
Title: VRFaces: Next-generation performance capture for digital humans in immersive VR
Supervision Team: Rachel McDonnell, TCD / Noel O’Connor, DCU
Description: It is expected that immersive conferencing in virtual reality will replace audio and video in the future. Using embodied avatars in VR as digital representations of meeting participants could revolutionize the way business is conducted, allowing for much richer experiences that incorporate the interpersonal communication that occurs through body language and social cues. However, creating virtual replicas of human beings is still one of the greatest challenges in the field of Computer Graphics. The aim of this project is to advance animation technology considerably to allow a user to truly “become” their virtual character, feeling ownership of their virtual face, with near cinema-quality facial animation.


Code: Vacancy Filled
Title: Modelling Purpose and Responsibility for Federated Governance of Data Sharing
Supervision Team: Dave Lewis, TCD / Rob Brennan, DCU
Description: Data sharing for AI training needs transparent governance and responsibilities. This research will develop semantic models for machine reasoning to help parties decide on data sharing agreements, e.g. for text, speech and video data to train medical chatbot agents. It will model the data's personal information content, intended use, scope of processing and sharing, governance procedures, ownership rights, security protocols and quality assurance liabilities.


Code: Vacancy Filled
Title: Video Coding Artefact Suppression Using Perceptual Criteria
Supervision Team: Anil Kokaram, TCD / Noel O’Connor, DCU
Description: Video traffic currently accounts for about 70% of all internet traffic, and is predicted to reach 80% by 2022. Data compression is the only reason that video has not broken the system. However, lossy video compression causes artefacts, e.g. blocking and contouring, which have to be removed by the video player receiving the compressed data. Current techniques for removing these artefacts do not exploit visual quality criteria relevant for humans. This causes a problem for video consumed on different devices. By exploiting the visibility of artefacts on different devices, this project will develop new techniques for artefact reduction that are sensitive to the human visual system, hence enabling appropriate video quality/bitrate compromises to be made for different devices.


Code: 2019TCD7
Title: Incorporating patient-generated health data into clinical records
Supervision Team: Lucy Hederman, TCD / Damon Berry, TUD
Description: Patient-generated health data (PGHD), that is, data originating from patients or their carers rather than from clinicians, is a growing feature of chronic disease care. PGHD has the potential to impact health care delivery and clinical research. This PhD will focus on the informatics aspects of these challenges, exploring how to allow for the incorporation of PGHD in EHRs and clinical systems, taking account of data interoperability issues, ensuring standardisation of the non-clinical data, and the appropriate representation of metadata about quality, governance and provenance. The research will be grounded in the Irish national health system and will seek to align with the national EHR project.


Code: Vacancy Filled
Title: Human Speech – How do I know it’s Real?
Supervision Team: Naomi Harte, TCD / Julie Berndsen, UCD
Description: How can you tell when speech is real, or when it is fake? This is the focus of this PhD project and it goes to the very core of the nature of human speech. Directly relating what is observable at a signal level in speech to how natural that signal is, as perceived by a human, is an unsolved problem in speech technology. This PhD addresses this gap in knowledge. The research will leverage a wealth of data from recent Blizzard speech synthesis challenges, where the naturalness of multiple world-class speech synthesis systems has been rated and made publicly available for researchers. Simultaneously, the research will also leverage shared datasets on spoofing from the automatic speaker verification community, such as those available through http://www.asvspoof.org/. The research is truly novel in that it goes beyond treating speech purely as a signal, and will bring the work to the level of investigating naturalness in continuous speech, over many seconds and sentences of generated speech.


Code: Vacancy Filled
Title: Psychological self-report with wearable technology
Supervision Team: Gavin Doherty, TCD / David Coyle, UCD
Description: There has been much interest recently in the design of technologies to support the delivery of mental healthcare, looking at many different aspects ranging from assessment and diagnosis, through intervention, to long-term self-monitoring. Asking people how they feel (self-report) is an important part of many mental health interventions. This ranges from short questions about how people are doing “in-the-moment” to formal psychological assessment. This PhD will investigate ways of improving engagement with psychological self-report using wearables such as smartwatches, and will look at how intelligent notification systems might improve the response rate to prompts.


Code: Vacancy Filled
Title: Enhancing Visual and Physical Interactions in Augmented Reality
Supervision Team: John Dingliana, TCD / Cathy Ennis, TUD
Description: This project deals with advancing the state of the art in the rendering and simulation of high-fidelity animated virtual objects in augmented reality (AR) environments. In particular, we will develop novel techniques for improving the perceived realism of interactions between real-world objects and dynamic virtual elements in real time. To address this problem, we will investigate the use of unified adaptive level-of-detail volumetric models that will serve as proxy geometry both for the real-world environment scanned by the AR system and for the virtual objects generated and simulated by the animation system.


Code: 2019TCD11
Title: Inclusive Multimodal HMI to enhance the wellbeing of Older Drivers
Supervision Team: Samuel Cromie, TCD / Chiara Leva, TUD
Description: Transport will be transformed within the coming years through automated, connected and electric vehicles. Much research is focused on HMI for these vehicles – how transitions between driver and automation will be handled at different levels of automation, how driver situational awareness can be maintained, how interaction with vulnerable road users can be managed. This project will focus on the older driver. The research will adopt a user-centred design approach to build on our recent research within TCD which profiled older drivers, segmented them for research purposes and developed personae to inform design.


Code: 2019TCD13
Title: Deep Meta-Learning for Automated Algorithm-Selection in Information Retrieval
Supervision Team: Joeran Beel, TCD / Gareth Jones, DCU
Description: The Automated Machine Learning (AutoML) community has made great advances in automating the algorithm selection and configuration process in machine learning. However, the “algorithm-selection problem” exists in almost every discipline, be it natural language processing, information retrieval, or recommender systems. Our goal is to improve algorithm selection in information retrieval and related disciplines (e.g. NLP) through AutoML techniques such as meta-learning. The idea behind meta-learning is to use machine learning or deep learning to learn, from large amounts of historical data, how algorithms will perform in certain scenarios.
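As an illustration of the meta-learning idea, the hedged sketch below trains a meta-model on per-dataset meta-features to predict which retrieval algorithm is likely to perform best on a new dataset; the meta-features, algorithm labels and toy numbers are purely hypothetical, not part of the project.

```python
# Meta-learning sketch: learn from historic per-dataset results which
# algorithm to select for an unseen dataset. All values are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row of meta-features per historic dataset:
# [log10(num_documents), avg_query_length, log10(vocabulary_size)]
X_meta = np.array([
    [4.2, 3.1, 4.8],
    [6.0, 2.4, 5.9],
    [5.1, 7.8, 5.2],
    [6.7, 2.1, 6.1],
])
# Label: the algorithm that achieved the highest nDCG on that dataset
y_best = np.array(["BM25", "neural_ranker", "BM25", "neural_ranker"])

meta_model = RandomForestClassifier(n_estimators=100, random_state=0)
meta_model.fit(X_meta, y_best)

# Recommend an algorithm for a new, unseen dataset
new_dataset = np.array([[5.5, 2.9, 5.4]])
print(meta_model.predict(new_dataset))
```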


Dublin City University


Code: 2019DCU1
Title: Socio-Technical Governance Framework for Clinical Risk Data Analytics
Supervision Team: Rob Brennan, DCU / Siobhan Corrigan, TCD
Description: Unexplainable data analytics such as deep learning are being rapidly promoted for deployment into healthcare risk management systems where there is already a chronic lack of effective governance of operational risks. For example, avoidable harm to patients costs up to 16% of total hospital expenditure according to the OECD and cost €194m in Ireland in 2009 (Rafter et al. 2016). The Scally Report on CervicalCheck in Ireland is severely critical of poor governance of risk. Since any credible risk management system must include human factors as well as technical risks, in this project we will examine the impact of the widespread availability of medical data analytics on risk from a socio-technical perspective and design appropriate risk data analytics governance methods and tools that can cope with this disruptive technology.


Code: Vacancy Filled
Title: Next Generation Search Engines
Supervision Team: Cathal Gurrin, DCU / Séamus Lawless, TCD
Description: The current approach to web search is based on a decades-old model of information retrieval in which the user converts an information need into a textual query and browses a result list that is minimally personalized by ranking algorithms operating over sparse personal data. Additionally, the current models are designed as closed-loop systems, with the search provider having control of the user model and monetising user profiles without any involvement of, or value for, the user. With large volumes and new forms of personal data being gathered nowadays, there is a massive opportunity in this project to look beyond the current approach to web search and develop a Next Generation Search Engine that puts the user profile at the centre of the ranking algorithms, and moreover, allows the user to control how their personal profile data is used by the search engine.


Code: Vacancy Filled
Title: Energy-Oriented and Quality-Aware Network Path Selection to Support Differentiated Adaptive Multimedia Delivery in Heterogeneous Mobile Network Environments
Supervision Team: Gabriel Muntean, DCU / Marco Ruffini, TCD
Description: The various wireless networks already provided by businesses, public institutions and others establish a heterogeneous wireless network environment which can provide ubiquitous connectivity. Since mobile device owners are on the move much of the time and use their devices anywhere and anytime, energy efficiency is of paramount importance when delivering data to mobile devices in general, and rich media content in particular. Moreover, users always prefer high-quality content, which requires more energy to transmit and process. Consequently, there is a need for a trade-off between quality and energy levels. This project will develop an energy-oriented mechanism to select suitable delivery paths for multimedia streaming services in heterogeneous wireless network environments in order to both save energy and maintain high levels of service quality.


Code: Vacancy Filled
Title: An Intelligent Diagnostic System for Classifying Dermatological Conditions using 3D Computer Vision
Supervision Team: Alistair Sutherland, DCU / Rozenn Dahyot, TCD
Description: Skin diseases are increasing globally. A fast, accurate and low-cost system for diagnosis would be very beneficial, especially in developing countries. The accurate detection of skin lesions, inflammation and the different subtypes of diseases such as rosacea and seborrheic dermatitis is vital for early treatment and medication. In this research, a three-stage approach will be pursued, focusing on 3D Computer Vision, Image Processing and Machine Learning. The aim of this project is to identify skin disorders for subtypes of rosacea and other skin conditions by establishing an image-based diagnosis system using 3D Computer Vision, Machine Learning and Artificial Intelligence. The system should be easily usable both by specialist clinicians and by general practitioners (GPs).


Code: Vacancy Filled
Title: Targeted Improvements for Technical Domain Machine Translation
Supervision Team: Andy Way, DCU / John Kelleher, TUD
Description: Neural MT (NMT) has delivered significant improvements in overall translation quality in recent years, but even the latest models struggle to accurately translate brand names and important technical terms. How can accurate translation be ensured for brand names and terms with known approved translations, even if the training data contains alternative translations? Can contextual clues be used to force the correct translation of ambiguous terms? This PhD will focus on exploring how improved term translation can be integrated within a general-domain NMT model, to make targeted improvements to the overall translation quality. The main application area is MT for custom domains, such as information technology and software localisation.
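For orientation, one simple baseline the project could compare against is placeholder-based terminology injection: approved terms are shielded before translation and the approved target forms restored afterwards. The sketch below assumes a generic translate() function and an invented term list; it is not the project's proposed method.

```python
# Placeholder-based terminology constraint (baseline sketch).
# translate() stands in for any NMT system; the term list is invented.
import re

APPROVED_TERMS = {                     # source term -> approved translation
    "Cloud Connect": "Cloud Connect",  # brand name: must stay untranslated
    "hot swap": "Hot-Swap",            # term with one approved rendering
}

def protect(src: str):
    """Replace approved terms with opaque placeholder tokens."""
    mapping = {}
    for i, term in enumerate(APPROVED_TERMS):
        token = f"TERM{i}X"
        if re.search(re.escape(term), src, flags=re.IGNORECASE):
            src = re.sub(re.escape(term), token, src, flags=re.IGNORECASE)
            mapping[token] = APPROVED_TERMS[term]
    return src, mapping

def restore(tgt: str, mapping: dict) -> str:
    """Swap placeholder tokens in the MT output for approved target terms."""
    for token, target_term in mapping.items():
        tgt = tgt.replace(token, target_term)
    return tgt

protected, mapping = protect("Enable hot swap before using Cloud Connect.")
# translated = translate(protected)      # any general-domain NMT system
# print(restore(translated, mapping))
```

Such placeholders are often mangled or dropped by neural models in practice, which is precisely the kind of limitation this PhD would aim to overcome.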


Code: 2019DCU6
Title: MT system selection and recycling/fixing recycling candidates in a hybrid set-up
Supervision Team: Andy Way, DCU / John Kelleher, TUD
Description: Domain-tuned MT systems outperform general-domain MT models when they are used to translate in-domain data. It may not always be known in advance of translation time which domain is best suited to a particular text or sentence, and even for a known domain like software, some strings may be better translated by a general-domain system. This gives rise to a number of research questions, including: Given multiple domain-tuned NMT systems and translation candidates, how do we analyse an incoming string and determine which system will produce the best translation at runtime? How do we best assess which translation candidate is the best choice? What are the best approaches for NMT? Also, if we have access to recycling (in a Translation Memory), when is a recycling match better than an MT candidate? Can NMT help fix high-quality TM matches? Can a better translation candidate be found by combining elements of multiple translations, from recycling and MT systems? Can post-editing data be leveraged, e.g. through a form of automatic post-editing?



Technological University Dublin


Code: 2019TUD1
Title: Linked geospatial data supporting Digitally Enhanced Realities
Supervision Team: Avril Behan, TUD / Declan O’Sullivan, TCD
Description: As with many other complex multi-property environments, navigation around healthcare campuses is a significant challenge for a variety of stakeholders, both during everyday usage (by clients, visitors, healthcare professionals, facility managers, equipment and consumables suppliers, and external contractors) and during design for construction and redesign for renovation/retrofit. This project will progress the integration of the currently diverse and unconnected geospatial, BIM and other relevant data to deliver better return on investment for both operational and development budget holders, while also developing the research capabilities of graduates and of the organisations with whom this project engages (for example, Ordnance Survey Ireland and the HSE).


Code: 2019TUD2
Title: Cough monitoring through audio analysis
Supervision Team: David Dorran, TUD / Ivana Dusparic, TCD
Description: Cough sounds are important indicators of an individual’s health and are often used by medical practitioners to diagnose respiratory and related ailments [1-2]. Automatic detection and classification of cough sounds through the analysis of audio recordings would provide a low-cost, non-invasive approach to health monitoring for individuals and communities. The principal aim of this research is to develop robust and reliable cough detection algorithms that are applied to audio recordings obtained from microphones in fixed locations around homes and public spaces. Once detected, the cough events will be further analysed to identify features and patterns which can be used to provide insight into the health of both the individual and the wider community.
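To make the detection task concrete, the hedged sketch below shows one plausible front end: sliding a window over a recording, extracting log-mel spectrogram features, and scoring each window with a pre-trained classifier. The file name, window sizes and classifier are placeholders rather than the project's actual pipeline.

```python
# Cough-detection front end (sketch): window the audio, compute log-mel
# spectrogram features, and score each window with a pre-trained model.
import numpy as np
import librosa

def window_features(path, win_s=1.0, hop_s=0.5, sr=16000, n_mels=64):
    """Return one flattened log-mel feature vector per analysis window."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    win, hop = int(win_s * sr), int(hop_s * sr)
    feats = []
    for start in range(0, max(len(y) - win, 1), hop):
        segment = y[start:start + win]
        mel = librosa.feature.melspectrogram(y=segment, sr=sr, n_mels=n_mels)
        feats.append(librosa.power_to_db(mel).flatten())
    return np.array(feats)

# X = window_features("living_room_mic.wav")        # placeholder recording
# cough_probability = model.predict_proba(X)[:, 1]  # placeholder classifier
```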


Code: 2019TUD3
Title: Model Compression for Deep Neural Networks
Supervision Team: John Kelleher, TUD / Rozenn Dahyot, TCD
Description: Deep learning has revolutionized digital media analysis, be it video, image, text, or indeed multimedia. The deep learning revolution is driven by three long-term trends: Big Data, more powerful computers (GPUs/TPUs), and ever larger models. At the same time there has been an increase in edge computing and in the deployment of deep neural networks to devices that have limited resources (such as memory, energy or bandwidth). This project will explore the development of novel cost functions tailored to deep learning models for video and image analysis; compression techniques for deep neural networks for video and image analysis; and error analysis for model compression techniques.
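As one illustration of the kind of compression technique in scope, the sketch below applies global magnitude pruning to a small convolutional model in PyTorch; the architecture and pruning ratio are arbitrary examples, not the project's chosen method.

```python
# Global magnitude pruning (sketch): zero out the smallest-magnitude
# weights across all conv/linear layers of a toy image model.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

# Prune 70% of conv/linear weights globally by L1 magnitude.
to_prune = [(m, "weight") for m in model.modules()
            if isinstance(m, (nn.Conv2d, nn.Linear))]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured,
                          amount=0.7)

# Report the resulting overall sparsity (fraction of zeroed weights).
zeros = sum(float(torch.sum(m.weight == 0)) for m, _ in to_prune)
total = sum(m.weight.nelement() for m, _ in to_prune)
print(f"global sparsity: {zeros / total:.1%}")
```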


Code: 2019TUD4
Title: Human-machine performance monitoring and prediction in Industry 4.0 applications (HU_MAP)
Supervision Team: Maria Chiara Leva, TUD / Samuel Cromie, TCD
Description: Human-in-the-loop automation has had a massively beneficial impact on safety-critical industries, with significant reductions in technological malfunctioning, leaving human error responsible for up to 80% of accidents. However, to take full advantage of human-machine collaboration, companies must understand how humans can most effectively augment machines, how machines can enhance what humans do best, and how to redesign business processes to support the partnership. The aim of this project is to design models that will demonstrate the potential of assessing safety-by-design in AI-driven automation.


Code: 2019TUD5
Title: Computational Models of Human Mental Workload for Human-Computer Interaction
Supervision Team: Luca Longo, TUD / Carol O’Sullivan, TCD
Description: The project will focus on multi-disciplinary research in the area of Mental Workload (MWL) modelling. The pervasive use of technologies in daily activities and working environments imposes ever more mental workload upon operators and less physical load. This PhD will address the key research problem of understanding the shape of MWL, its core dimensions, their relationships and their impact on human performance in ecological Human-Computer Interaction. Deep learning modelling techniques will be employed.


Code: 2019TUD6
Title: Hierarchical Policy Estimation for Multi-Modal Content Delivery in Virtual Second Language Acquisition Tutorials
Supervision Team: Robert Ross, TUD / Julie Berndsen, UCD
Description: A key challenge in Second Language Acquisition is to give language learners meaningful opportunities to practice and learn while making mistakes in an accessible environment. Virtual characters have the potential to provide tuition by engaging with the learner on a one-to-one basis around experiences in the shared virtual environment. The computational modelling work of this PhD will focus on the application of state-of-the-art hierarchical reinforcement learning methods to learn flexible models that control what the tutor says and how the tutor says it. This modelling will be backed by empirical work to acquire language training data from virtual and physical environments.


Code: 2019TUD7
Title: SoundGen: Creative and targeted sound mixing, using deep learning neural networks
Supervision Team: Susan McKeever, TUD / Andrew Hines, UCD
Description: The SoundGen project will deliver state-of-the-art techniques for advanced and effective sound generation and mixing. This work is inspired by recent developments in neural style transfer networks for image mixing. Sounds can be represented as spectrogram images, which have proven effective as representations for sound when used with neural network classifiers. This project proposes to use spectrograms in combination with CNNs that have been trained on a variety of sounds, to discover how specific feature maps of the CNN are associated with aspects of sound, similar to image neural style transfer networks.
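To illustrate how the image style-transfer machinery carries over to sound, the hedged sketch below optimises a spectrogram so that the Gram matrices of its CNN feature maps match those of a "style" spectrogram; the tiny untrained network and random tensors are stand-ins for a sound-trained CNN and real log-spectrograms.

```python
# Spectrogram style transfer (sketch): match Gram matrices of CNN feature
# maps, exactly as in image neural style transfer. All tensors are toys.
import torch
import torch.nn as nn

feature_net = nn.Sequential(              # stand-in for a sound-trained CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

def gram(features):
    """Channel-by-channel correlation of feature maps (the "style")."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

style_spec = torch.randn(1, 1, 128, 256)   # log-spectrogram of "style" sound
mix_spec = torch.randn(1, 1, 128, 256, requires_grad=True)  # being optimised

optimiser = torch.optim.Adam([mix_spec], lr=0.01)
for _ in range(200):
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(gram(feature_net(mix_spec)),
                                  gram(feature_net(style_spec)))
    loss.backward()
    optimiser.step()
# mix_spec could then be inverted back to audio, e.g. with Griffin-Lim.
```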



University College Dublin


Code: Vacancy Filled
Title: Designing to support common ground building in multiparty VR dialogues
Supervision Team: Benjamin Cowan, UCD / Vincent Wade, TCD
Description: Interaction in Virtual Reality often revolves around inputs such as gesture, gaze and control using physical hardware. VR is also growing as a potential way to engage in multiparty communicative tasks (such as multiparty meetings) with other users, with the possibility of engaging with a combination of virtual and avatar-based agents. Yet there is little understanding to date about how the design of VR environments and avatars can be optimized for effective conversational interactions. The proposed PhD will push the boundaries of research in VR by exploring how the design of VR-based experiences causally influences the development and maintenance of common ground and perceptions of shared understanding.


Code: Vacancy Filled
Title: Fostering digital social capital through design for wellbeing
Supervision Team: Marguerite Barry, UCD / Gavin Doherty, TCD
Description: Studies in eHealth have shown that communication technologies can support intervention and treatment through the exchange of ‘social capital’ (SC), a concept from social science associated with a range of individual and societal benefits. Although a strong association between SC and wellbeing has been identified, there is a lack of empirical data and consistent methods for fostering social capital through design. This project begins by systematically exploring the position of social capital within HCI to identify core design challenges for eHealth. Then, using participatory design methods, it will prototype technologies and develop a novel design framework based on platform-independent interactive features of digital applications. The project is suitable for a researcher interested in HCI, mental health and research through design.


Code: 2019UCD3
Title: Could you please repeat that? Deep learning non-native speech patterns
Supervision Team: Julie Berndsen, UCD / John Kelleher, TUD
Description: Voice interfaces have become pervasive in our daily lives, with industry now looking to further transform user voice experiences. The success of the big players has been due to the enormous amount of data they now have available, which can be exploited by deep learning technologies. While speech recognition is often regarded as a “solved problem”, non-native and accented speech (i.e. regional variation, dialect) continues to be problematic, primarily due to the lack of data. This PhD project draws together natural language processing, speech technologies and AI, applying deep learning (e.g. adversarial and transfer learning) to the identification of native/non-native speaker models for personalised interactive language education.


Code: 2019UCD4
Title: ASD Augmented: Influencing pedagogical perspectives and practices
Supervision Team: Eleni Mangina, UCD / Aljosa Smolic, TCD
Description: This project begins with the hypothesis that the emerging technology of Augmented Reality (AR) will influence pedagogical perspectives and practices for students with ASD. Research studies indicate that students with autism choose majors in Science, Technology, Engineering and Maths (STEM) at higher rates than students in the general population. They are “looking for patterns, and in Science it is natural to look for patterns that reflect natural law”. The aim is to identify the impact of AR on concentration for students diagnosed with ASD.



National University of Ireland, Galway


Code: Vacancy Filled
Title: Using digitally-enhanced reality to reduce sedentary behaviour at home in an elderly population
Supervision Team: Jane Walsh, NUIG / Owen Conlan, TCD
Description: It is well known that regular physical activity (PA) limits the development and progression of chronic diseases and disabling conditions. However, time spent in sedentary behaviour (SB) has increased substantially over the last three decades and increases with age. The project will explore health behaviour change from a behavioural science perspective using the ‘person-based approach’ and will develop appropriate personalised behaviour change techniques (BCTs) integrated into VR systems to effectively reduce sedentary behaviour in older adults at home.


Code: Vacancy Filled
Title: Privacy and ethical value elicitation in smartphone-based data collection
Supervision Team: Mathieu D’Aquin, NUIG / Dave Lewis, TCD
Description: In some areas of health research, data collection is increasingly being carried out through participants’ personal mobile devices. A key challenge, however, is dealing with privacy and ethics in a way which is meaningful to the participants. GDPR compliance is an obvious aspect, with much of it already handled by existing frameworks (e.g. consent gathering, secure data handling, etc.). There is, however, increasing concern around data ethics which the technical handling of GDPR alone is not sufficient to address. In this project, we want to investigate ways to introduce a continuous alignment between participants’ privacy preferences and ethical values in smartphone-based health-related studies. The case study we are looking into relates to asthma and the effect of air quality on perceived asthma symptoms.


Code: Vacancy Filled
Title: A Multi-User VR Therapy Space
Supervision Team: Attracta Brennan, NUIG / Gabriel-Miro Muntean, DCU
Description: A multi-user VR space can enable social interactions and experiences in a safe and moderated environment for those suffering from a range of mental, affective and behavioural disorders. Such a VR environment creates new opportunities to improve the quality of life of persons affected by such conditions. This proposal will take advantage of new developments in wireless VR to develop a set of tools and virtual environments that can be widely deployed. The VR space will be immersive, activity-based, and will facilitate multi-user interactions, enabling the person to engage with a professional therapist or with their friends and family. A number of interactive scenarios, matched to the nature of a particular disorder, will be deployed and validated through user studies.


Code: Vacancy Filled
Title: A VR social connecting space for improved quality of life of persons with dementia
Supervision Team: Dympna Casey, NUIG / Marguerite Barry, UCD
Description: Reminiscence and music are two key strategies used to promote social connectedness and reduce loneliness. Listening to and sharing music and recalling events from the past reconnect the person with dementia to the present, enabling them to converse, interact and socialize. This research proposal will create a set of meaningful multi-user VR spaces for people with dementia, focused on providing opportunities to reminisce and to engage in music-based activities. VR design skills will be combined with user-centred design and public and patient involvement (PPI) to deliver an effective set of meaningful VR experiences that can be widely deployed to benefit persons living with dementia.


Code: 2019NUIG5
Title: Value-led design for personalized enhanced reality applications
Supervision Team: Heike Schmidt-Felzmann, NUIG / Marguerite Barry, UCD
Description: Value sensitive design (VSD) is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner. VSD addresses design issues within the fields of information systems design and human-computer interaction by emphasising the ethical values relevant to users and other stakeholders. In this project the researcher will work with multiple research teams at NUIG to apply VSD to (i) a distributed eHealth app to monitor air quality for persons with respiratory conditions, and (ii) the design of a smart speaker for the elderly. In the later stages of this project VSD may be applied to additional D-REAL projects.


Code: Vacancy Filled
Title: Advanced Facial Models for Rendering of Virtual Humans
Supervision Team: Michael Schukat, NUIG / Rachel McDonnell, TCD
Description: This PhD project will build on the current state of the art by applying advanced facial generation techniques from deep learning, such as StyleGAN, to build photo-realistic ‘random’ 3D facial models that can subsequently be rendered to multiple 2D poses. A particular focus of this research will be on the photorealistic rendering of facial textures at NIR and LWIR frequencies so that these models can generate multi-wavelength representations. In addition, this work will seek to go beyond existing facial datasets by rendering additional ground-truth information from the 3D animations.