DCU | NUIG | TCD | TU Dublin | UCD

D-REAL is funded by Science Foundation Ireland and by the contributions of industry partners.

Applications for D-REAL positions starting in September 2020 are being accepted. We now operate a rolling call: applications may be submitted whenever positions are open. If you have any questions about the programme, please email stephen.carroll@d-real.ie.


Dublin City University

Code: 2020DCU01 (position filled)
Title: Prosthetics, Virtual and Augmented Reality, and Psychosocial Outcomes
Supervision Team: Pamela Gallagher, DCU (Primary Supervisor) / Brendan Rooney, UCD (External Secondary Supervisor)
Description: To date, applications of virtual and augmented reality in rehabilitation have focused predominantly on improving physical and clinical outcomes. There is enormous scope to learn whether and how VR or AR environments affect outcomes for people who use assistive technology (e.g. wheelchairs, prosthetics) and to explore their broader psychosocial impact. This research will capture the inter-relations between self-conceptions, identity, assistive technology integration and VR/AR environments to optimise well-being, proficiency and personally meaningful outcomes. This project will explore (1) whether, and what type of, VR or AR environment can be used to develop greater integration of person and technology, improving not only physical functioning but also psychological wellbeing; (2) which psychological outcomes are affected (e.g. how does it shape a person’s sense of self and identity?); and (3) what type and quality of VR or AR environment would have the greatest impact on these outcomes.


Code: 2020DCU03 (position filled)
Title: Conversational Search of Image and Video with Augmented Labeling
Supervision Team: Gareth Jones, DCU (Primary Supervisor) / Benjamin Cowan, UCD (External Secondary Supervisor)
Description: The growth of media archives (including text, speech, video and audio) has led to significant interest in the development of search methods for multimedia content. A significant and rapidly expanding new area of search technology research in recent years has been conversational search (CS). In CS, users engage in a dialogue with an agent that supports their search activities, with the objective of enabling them to find useful content more easily, quickly and reliably. To date, CS research has focused on text archives; this project is the first to explore CS methods for multimedia archives. An important challenge within multimedia search is the formation of queries to identify relevant content. This project will seek to address this challenge by exploring the use of technologies from augmented reality to dynamically label images and video displayed within the search process, to assist users in forming more effective queries using a dialogue-based search framework.


Code: 2020DCU04 (position filled)
Title: Understanding the Experience of Interactive Machine Translation
Supervision Team: Sharon O’Brien, DCU (Primary Supervisor) / Benjamin Cowan, UCD (External Secondary Supervisor)
Description: An increasing amount of information is being translated using Machine Translation (MT). Despite major improvements in quality, human intervention is still required if the quality is to be trusted. This intervention takes the form of “post-editing”, i.e. identifying and quickly fixing MT errors. This tends to be done in a serial manner: the source text is sent to the MT engine, an MT suggestion appears, and the editor assesses it and fixes it if necessary (termed “traditional” post-editing). Recently, a new modality called Interactive Machine Translation (IMT) has been developed, involving real-time adaptation of the MT suggestion as the editor makes choices and implements edits. Very little is understood about this new form of human-machine interaction. This PhD will fill this knowledge gap by researching the cognitive effort involved in this type of translation production; how perceptions of system competence and trust influence decision-making; and how these evolve over time and with experience.


Code: 2020DCU05 (position filled)
Title: Paiste Corcra: Finding the purple patches for CALL in LILAC (the Learner Language Corpus)
Supervision Team: Monica Ward, DCU (Primary Supervisor) / Elaine Uí Dhonnchadha, TCD (External Secondary Supervisor)
Description: Language learning is a complex task that involves a range of cognitive processes. In many countries, the teaching and learning of Less Commonly Taught Languages makes minimal use of digital technologies. There is a need for intelligent digital content to help teachers teach these languages and to help students learn them more efficiently and effectively. Learner corpora (electronic collections of language learner data) have been shown to be beneficial for language learning. Error-tagged learner corpora, in which all the errors in a corpus have been annotated with standardised error tags, are particularly useful, but little such data is available for learners of Less Commonly Taught Languages. This project will design and develop a Learner Language Corpus (LILAC) based on adaptive (machine) learning. The LILAC corpus will be analysed to extract detailed error statistics and to carry out analyses of specific error types.
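As a purely illustrative sketch of the kind of analysis an error-tagged corpus enables (the sentences and tag names below are invented for illustration and are not from the LILAC corpus or any standard tagset):

```python
from collections import Counter

# Hypothetical error-tagged learner sentences: each token is paired with
# an error tag (None = correct). Here "ACCENT" marks a missing fada,
# e.g. a learner writing "Ta" for "Tá".
corpus = [
    [("Ta", "ACCENT"), ("an", None), ("fear", None), ("mor", "ACCENT")],
    [("Bhí", None), ("sé", None), ("ag", None), ("rith", None)],
    [("Ta", "ACCENT"), ("me", "ACCENT"), ("go", None), ("maith", None)],
]

def error_statistics(corpus):
    """Count error tags across an error-tagged corpus and report the
    overall error rate per token."""
    tags = Counter(tag for sent in corpus for _, tag in sent if tag)
    n_tokens = sum(len(sent) for sent in corpus)
    return tags, sum(tags.values()) / n_tokens

stats, rate = error_statistics(corpus)
print(stats.most_common())  # most frequent error types first
print(round(rate, 2))
```

Statistics like these, aggregated over many learners, are what make error-tagged corpora useful for targeting teaching at the most frequent error types.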



National University of Ireland, Galway

Code: 2020NUIG01 (position filled)
Title: Value-led design for personalized enhanced reality applications
Supervision Team: Heike Felzmann, NUIG (Primary Supervisor) / Marguerite Barry, UCD (External Secondary Supervisor)
Description: This project will investigate the ethical, legal and HCI aspects associated with the personalisation of enhanced reality and virtual reality applications, with the aim of identifying relevant concerns and developing potential solutions for problematic uses of those technologies. The project will draw on use cases from NUIG groups in the field of eHealth and smart city applications from a Value-Sensitive Design perspective. It aims to identify relevant value-related concerns for specific applications and explore their potential generalisability to other application domains in the field of enhanced and virtual reality.


Code: 2020NUIG02 (position filled)
Title: A Multi-User VR Recreational Space for People with Dementia
Supervision Team: Sam Redfern, NUIG (Primary Supervisor) / Gabriel-Miro Muntean, DCU (External Secondary Supervisor)
Description: Dementia is one of the greatest societal and economic health challenges of the 21st century, and a number of research initiatives have proven the usefulness of VR as a therapy tool. Although reducing social isolation and supporting re-connection with friends and family are central to improving outcomes for people with dementia, networked VR-based therapy technologies with an emphasis on social activity have not previously been studied. This project will create a multi-user VR space where socialisation and social performance are supported. The VR space will be immersive and activity-based, and will facilitate multi-user interactions, enabling the person to engage with a professional therapist, or their friends and family, without the logistical difficulties of physical travel. A number of interactive scenarios will be deployed and validated through user studies. Supervision is by a cross-disciplinary team of computer scientists and nurses.


Code: 2020NUIG03 (position filled)
Title: Using digitally-enhanced reality to reduce obesity-related stigma
Supervision Team: Jane Walsh, NUIG (Primary Supervisor) / Owen Conlan, TCD (External Secondary Supervisor)
Description: Weight-related stigma is well established as a pervasive feature of societies and predicts higher risk of depression, anxiety, and suicidality, as well as greater risk of inflammation and chronic disease. Medical professionals consistently display high levels of anti-obesity bias, assume obesity suggests patient non-compliance, and admit they would prefer to avoid dealing with obese patients at all. A huge industry now exists around overcoming obesity and supporting weight management. However, much of the research suggests that reducing stigma will have a significantly greater impact on rates of obesity. The present study proposes to develop, deliver and evaluate an evidence-based VR intervention to foster empathy and reduce obesity-related stigma in target groups (e.g. medical students). This will be achieved by synergising current psychological research on empathy and stigma with state-of-the-art VR technologies. Intervention content will be developed using the ‘person-centred approach’ and outcomes assessed will include both psychological and behavioural indicators of success.


Code: 2020NUIG04 (position filled)
Title: Multi-Lingual Lip-Synch – Text to Speech & Audio Representation
Supervision Team: Peter Corcoran, NUIG (Primary Supervisor) / Rachel McDonnell, TCD (External Secondary Supervisor)
Description: This project will apply some of the latest deep learning techniques to build specialised datasets and train advanced AI neural network models to deliver a real-time multi-lingual lip-synch for speakers in a video. This project will focus on conversion of text subtitles into an intermediate speech representation suitable across multiple languages (e.g. phonemes). The preparation and automated annotation of specialised datasets provides an opportunity for high-impact research contributions from this project. The researcher on this project will collaborate with a 2nd PhD student (2020TCD07) who will focus on photo-realistic lip-synching of the speech data. Both PhDs will have a unique opportunity to collaborate with engineers from Xperi, the supporting industry partner. The end goal is a practical production pipeline, inspired by Obamanet, for multi-lingual over-dubbing of video content from multi-lingual subtitles.



Trinity College Dublin

Code: 2020TCD01 (position filled)
Title: Deep Meta-Learning for Automated Algorithm-Selection in Information Retrieval & Machine Learning
Supervision Team: Joeran Beel, TCD (Primary Supervisor) / Gareth Jones, DCU (External Secondary Supervisor)
Description: The Automated Machine Learning (AutoML) community has made great advances in automating the algorithm selection and configuration process in machine learning. However, the “algorithm-selection problem” exists in almost every discipline, be it natural language processing, information retrieval, or recommender systems. Our goal is to improve algorithm selection in information retrieval and related disciplines (e.g. NLP) through AutoML techniques such as meta-learning. The idea behind meta-learning is to use machine learning / deep learning to learn from large amounts of historic data how algorithms will perform in certain scenarios.
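As a minimal, hedged sketch of the meta-learning idea described above (the meta-features, algorithm names and data are all invented for illustration, and a real meta-learner would use a trained model rather than nearest neighbour):

```python
# Hypothetical meta-dataset: each entry maps dataset meta-features
# (n_documents, n_features, sparsity) to the algorithm that performed
# best on that dataset in past experiments.
meta_data = [
    ((100, 5, 0.1), "bm25"),
    ((120, 6, 0.2), "bm25"),
    ((10000, 300, 0.9), "neural_ranker"),
    ((8000, 250, 0.8), "neural_ranker"),
]

def normalise(f):
    # Crude per-dimension scaling so the distance treats features comparably.
    return (f[0] / 10000, f[1] / 300, f[2])

def predict_best_algorithm(features, memory):
    """1-nearest-neighbour meta-learner: recommend the algorithm that won
    on the most similar previously seen dataset."""
    fx = normalise(features)
    def dist(entry):
        fy = normalise(entry[0])
        return sum((a - b) ** 2 for a, b in zip(fx, fy))
    return min(memory, key=dist)[1]

print(predict_best_algorithm((9000, 280, 0.85), meta_data))
```

The point of the sketch is only the framing: historic performance data becomes training data for a second-level learner that predicts which algorithm to run on an unseen dataset.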


Code: 2020TCD02 (position filled)
Title: Driver sensing & inclusive adaptive automation for older drivers
Supervision Team: Sam Cromie, TCD (Primary Supervisor) / Chiara Leva, TU Dublin (External Secondary Supervisor)
Description: The proportion of over-65s in the population is growing; by 2030 a quarter of all drivers will be older than 65. At the same time, transport is being transformed by connected, automated and electric vehicles. This project will take a user-centred design approach to understanding the needs of older drivers, exploring how these could be addressed through driver sensing and adaptive automation. Progress beyond the state of the art will include a technology strategy for an inclusive, personalised, multimodal Human Machine Interface (HMI) for older drivers and an inclusive standard for driver sensing and/or HMI for connected and autonomous vehicles.


Code: 2020TCD03 (position filled)
Title: Using Machine Learning to identify the critical features of carotid artery plaque vulnerability from Ultrasound images
Supervision Team: Caitríona Lally, TCD (Primary Supervisor) / Catherine Mooney, UCD (External Secondary Supervisor)
Description: Over one million people in Europe have severe carotid artery stenosis, in which plaque may rupture and cause stroke, the leading cause of disability and the third leading cause of death in the Western world. This project aims to develop a validated means of assessing the predisposition of a specific plaque to rupture using Ultrasound (US) imaging. Using machine learning (ML) techniques, we will look at multiple US modalities concomitantly, including B-mode images, elastography and US imaging with novel contrast agents for identifying neovascularisation. Combining this with in silico modelling of the vessels will provide us with a unique capability to verify the clinical and US findings by looking at the loading on the plaques and therefore the potential risk of plaque rupture. Proof of the diagnostic capabilities of ML and non-invasive, non-ionising US imaging in vivo for the diagnosis of vulnerable carotid artery disease would be a ground-breaking advancement in early vascular disease diagnosis.


Code: 2020TCD04 (position filled)
Title: Delivery of high-performance multi-media applications through dynamic coordination of networking, transcoding and user device technology
Supervision Team: Marco Ruffini, TCD (Primary Supervisor) / Gabriel-Miro Muntean, DCU (External Secondary Supervisor)
Description: 5G promises to deliver ubiquitous high capacity, ultra-reliable services, so that bandwidth and latency dependent applications, such as Augmented and Virtual Reality can run seamlessly in any location. However, such networks will be built over time and the final 5G target performance of ultra-high capacity, low latency and massive densification will be achieved over many years, culminating in the network generation beyond 5G (today already referred to as 6G, which is hypothesized to launch commercially in 2030). This project aims at investigating efficient resource management mechanisms, combined across applications and network resources to achieve deterministic performance across a shared infrastructure of a developing network. Our approach will consider: multiple radio access technologies and different density of access points; multiple scenarios for edge node deployment; use of AI algorithms to coordinate the control plane across multiple network segments. The work will be carried out on the new experimental beyond 5G testbed platform deployed across the city of Dublin.


Code: 2020TCD05 (Pause on recruitment)
Title: Mobile apps for advanced language learning: speaking and listening
Supervision Team: Elaine Uí Dhonnchadha, TCD (Primary Supervisor) / John Kelleher, TU Dublin (External Secondary Supervisor)
Description: This PhD research targets one of the most fundamental of human needs – the ability to communicate successfully. Language learners often find it difficult to get domain-specific practice in speaking and listening in a target language before arriving in the destination country. This research will focus on providing domain/subject-specific listening and/or speaking practice for academic English on mobile platforms. To improve listening skills, natural language processing techniques can be used to source academic materials relevant to the student’s area of study in order to generate listening/reading exercises (using text-to-speech synthesis or recordings). To improve speaking skills, current strategies in computer-aided pronunciation teaching (CAPT) include generating model sentences (targeting known language-pair difficulties) using a model of the learner’s own voice together with a target native accent. Possible avenues of exploration include deep neural networks using syllable-based prosodic features and/or phonological feature embeddings, along with the use of transfer learning for mispronunciation detection. The systems developed will be deployed on mobile platforms to provide convenient, personalised and targeted learning support for the language learner.


Code: 2020TCD06 (position filled)
Title: My voice matters – extending high performance speech interfaces to the widest possible audience
Supervision Team: Naomi Harte, TCD (Primary Supervisor) / John Kelleher, TU Dublin (External Secondary Supervisor)
Description: The performance of speech interfaces continues to improve at pace, with users now able to engage with technology such as Google Duplex to automatically book a restaurant. A person’s ability to enter a world full of speech-interface driven technology depends directly on whether that technology works well for their own speech. Many users, such as those with speech impediments, the elderly, young children, and non-native speakers can become excluded. This PhD will explore ways to improve performance in speech interfaces for marginalised users. A fundamental understanding of how speech from these users is different gives us the best opportunity to guide deep-learning systems to solutions that serve a wider range of speakers. We need to discover what, and how, DNNs learn from speech, and leverage this to develop models with a greater ability to understand less-encountered speaking styles. This PhD will contribute fundamental ideas both in speech understanding, and in interpretable and adaptable AI. This PhD will be aligned with the sponsorship by Sonas Innovation (http://sonasi.com) of D-REAL PhDs, and will also benefit from research ongoing within the SFI ADAPT Research Centre and the Sigmedia Research Group at TCD.


Code: 2020TCD07
Title: Multi-Lingual Lip-Synch for photorealistic virtual humans
Supervision Team: Rachel McDonnell, TCD (Primary Supervisor) / Peter Corcoran, NUIG (External Secondary Supervisor)
Description: This project will investigate synthesizing plausible animations and behaviours for photorealistic conversing virtual characters. In particular, we will investigate using advanced computer graphics and animation techniques in combination with deep learning methods to improve the naturalness of character animation and speech synthesis, by predicting visual speech features and non-verbal behaviours such as head movement and gestures from a linguistic input. A large database of training data will be created using photogrammetry to capture 3D scans of humans, in combination with motion capture of their performances (facial movements, body movements, audio, etc.). The data will be used to train generative adversarial networks (GANs) for synthesizing realistic character movements. The focus will be on high-quality multi-lingual lip animations that will lead to better user experiences in a wide range of applications such as computer games, movies, and intelligent assistants. The Ph.D. student will have a unique opportunity to collaborate with engineers from Xperi, the supporting industry partner.


Code: 2020TCD08 (position filled)
Title: Personalised Support for Reconciling Health and Wellbeing Information of Varying Complexity and Veracity Towards Positive Behavioural Change
Supervision Team: Owen Conlan, TCD (Primary Supervisor) / Jane Suiter, DCU (External Secondary Supervisor)
Description: This project will introduce a new approach to visual scrutability that can facilitate users in examining complex and sometimes conflicting information, specifically in the context of personal healthcare, towards changing behaviour to improve their health. The research will examine how scrutability, an approach to facilitating the inspection and alteration of the user models that underpin a system’s interaction with a user, may provide a means of empowering users to interact with such difficult-to-reconcile information in order to promote behaviour change. Visual metaphors can empower the user to scrutinise and reflect on the development of their understanding, their knowledge acquisition and the validity of different sources of information. Defining a new approach that enables users to reflect upon and control what information they consume in a personalised manner is proposed as a key element in fostering enhanced understanding of their own healthcare and wellbeing. This research will build on results from the H2020 Provenance and ProACT projects.


Code: 2020TCD09 (position filled)
Title: Embody Me: Achieving Proxy Agency through Embodiment in Mixed Reality
Supervision Team: Carol O’Sullivan, TCD (Primary Supervisor) / Gearóid Ó Laighin, NUIG (External Secondary Supervisor)
Description: An important factor in achieving believable and natural interactions in Virtual and Mixed Reality systems is the sense of personal agency, i.e., when a user feels that they are both controlling their own body and affecting the external environment e.g., picking up a ball and throwing it. The most natural way to give a sense of agency and control to a user is to enable them to use their own natural body motions to effect change in the environment. However, in restricted spaces or if the user has a disability, this may not always be possible. In this PhD project, we will investigate the effects of different strategies for non-direct agency in the environment, from simple device inputs, through to controlling an embodied virtual agent. This will involve animating the motions of a virtual avatar, and controlling the motions of this avatar using a variety of different methods (e.g., game controller, gesture, voice).


Code: 2020TCD10 (position filled)
Title: Hand-object manipulation tracking using computer vision
Supervision Team: Gerard Lacey, TCD (Primary Supervisor) / Alistair Sutherland, DCU (External Secondary Supervisor)
Description: Current hand tracking for VR/AR interfaces focuses on the manipulation of virtual objects such as buttons, sliders and knobs. Such tracking is most often based on tracking each hand independently, and when hands become partially occluded or are grasping a real object the tracking often fails. Tracking the hands during the manipulation of real-world objects opens up AR/VR to much richer forms of interaction and would provide the basis for activity recognition and the display of detailed contextual information related to the task at hand. This PhD project involves researching the tracking of unmodified hands with an ego-centric camera (2D and 3D) in the presence of partial occlusions. Technologies will include the use of deep learning models in combination with 3D models to determine hand pose under occlusion. Our approach will also exploit high-level knowledge about object affordances and common hand grasp configurations, as is commonly used in robotic grasping.


Code: 2020TCD11
Title: AI driven Digital Companions for human empowerment
Supervision Team: Vincent Wade, TCD (Primary Supervisor) / Robert Ross, TU Dublin (External Secondary Supervisor)
Description: The pervasive availability of multimodal digital media allows for ubiquitous enrichment of users’ work and leisure lives. However, constant exposure to web content, emails, text messages, audio and video is creating mentally fatigued users with low attention spans, concentration deficiencies and, consequently, fewer knowledge acquisition opportunities. This project will focus on personalised multimodal data absorption and transfer in future working environments, improving cognitive behavioural patterns by leveraging the engagement of all the senses. The underpinning work will focus on devising AI models for the systematic transformation of big data into “easy to absorb units of knowledge”, which would become the basic modular blocks across the multimodal sources of data, matched to the real-time, personalised cognitive profile of a user. The research will provide key components of an intelligent, aware infrastructure embedded in work and home environments; such infrastructure is becoming increasingly invisible or “hidden in plain sight”. This PhD project will focus on the combination of rich AI services and multi-modal interaction, and their application in personalised digital companions. This project is part-funded by Nokia Bell Labs.


Code: 2020TCD12 (no longer accepting applications)
Title: Multi-Perspectivity in Next-Generation Digital Narrative Content
Supervision Team: Mads Haahr, TCD (Primary Supervisor) / Marguerite Barry, UCD (External Secondary Supervisor)
Description: Stories and storytelling are crucial to the human experience as well as to the creation of meaning individually and socially. However, today’s most pressing issues, such as climate change and the refugee crisis, feature multilateral perspectives with different stakeholders, belief systems and complex interrelations that challenge traditional ways of narrative representation. Existing conventions (e.g., in news and on social media) lack the expressive power to capture these complex stories and too easily become prone to oversimplified presentation of complex material – even fake news – resulting in polarization of populations. Taking its starting point in the System Process Product (SPP) model developed by Koenitz (2015), this research will develop a narrative architecture useful for structuring multi-perspective narrative content and evaluate it through the creation of multi-perspective narratives, at least one of which will be a VR/AR/MR experience.


Code: 2019TCD04 (position filled)
Title: VRFaces: Next-generation performance capture for digital humans in immersive VR
Supervision Team: Rachel McDonnell, TCD (Primary Supervisor) / Noel O’Connor, DCU (External Secondary Supervisor)
Description: It is expected that immersive conferencing in virtual reality will replace audio and video in the future. By using embodied avatars in VR to act as digital representations of meeting participants, this could revolutionize the way business is conducted in the future, allowing for much richer experiences incorporating the interpersonal communication that occurs through body language and social cues. However, creating virtual replicas of human beings is still one of the greatest challenges in the field of Computer Graphics. The aim of this project is to advance the animation technology considerably to allow a user to truly “become” their virtual character, feeling ownership of their virtual face, with near cinema quality facial animation.
Note: This project is part of cohort 1 and applicants may start earlier than September 2020. For this position we will only be able to pay EU/EEA fees.


Code: 2020DCU02 (position filled)
Title: Improving Open Domain Dialogue Systems and Evaluation
Supervision Team: Yvette Graham, TCD (Primary Supervisor)
Description: Do you wish Alexa was more reliable, entertaining and funny? Dialogue systems, such as Alexa, are currently incapable of communicating in a human-like way and this is one of the grandest challenges facing Artificial Intelligence. This project involves developing new approaches to dialogue systems that will allow the systems we interact with every day to become more personable and easier to communicate with. The focus will be on examining how existing dialogue systems work and where they need improvement. The project will also look at developing ways of giving systems more personality, making them better at responding to instructions and even making them more entertaining for users to interact with.



Technological University Dublin

Code: 2020TUD01 (position filled)
Title: Fatigue monitoring and prediction in Rail applications (FRAIL)
Supervision Team: Maria Chiara Leva, TU Dublin (Primary Supervisor) / Sam Cromie, TCD (External Secondary Supervisor)
Description: The effect of fatigue on human performance has been observed to be an important factor in many industrial accidents. However, defining and measuring fatigue is not easily accomplished. The objective of this project is to test the most relevant mobile wearable and non-wearable unobtrusive technologies for monitoring fatigue in three different working environments in Irish Rail (e.g. train driving, signalling and maintenance tasks), and to develop a protocol that combines biosensor and/or mobile performance data related to fatigue assessment (e.g. a mobile version of the Psychomotor Vigilance Task, unobtrusive eye-tracking devices, wearable HRV measurement devices, physical activity monitoring, mobile EEG) with self-reported journal/self-assessment data, helping operators and organisations to monitor the levels of fatigue experienced. The project will ultimately deliver a proposal for a user-friendly fatigue-monitoring tool to assist with continuous improvement in the area of Fatigue Risk Management.


Code: 2020TUD02 (position filled)
Title: Creating plausible speech-driven conversational characters and gestures
Supervision Team: Cathy Ennis, TU Dublin (Primary Supervisor) / Rachel McDonnell, TCD (External Secondary Supervisor)
Description: Interaction with virtual characters provides increased engagement and opportunities for immersion in a social context. Online gaming and Augmented and Virtual Reality applications provide a space for embodied interaction: players/users can be represented by a virtual avatar, and exchanges become more engaging. Plausible avatars play a key role in creating VR environments that allow the user to feel high levels of immersion. However, the requirements of real-time dynamic interaction pose a serious challenge for developers: real-time behaviour and animation are required in scenarios where it is unknown in advance what types of behaviour will be needed. One way to enhance immersion is to ensure that the characters users are represented by, and engage with, behave plausibly. We aim to tackle part of this problem by investigating ways to generate plausible non-verbal social behaviours (such as conversational body motion and gestures) for these characters.


Code: 2020TUD03 (position filled)
Title: Privacy-preserving Pedestrian Movement Analysis in Complex Public Spaces
Supervision Team: Bianca Schoen-Phelan, TU Dublin (Primary Supervisor) / Mélanie Bouroche, TCD (External Secondary Supervisor)
Description: Smart cities should encourage walking, as it is one of the most sustainable and healthiest modes of transport. However, designing public spaces to be inviting to pedestrians is an unresolved challenge due to the wide range of possible routes and the complex dynamics of crowds. New technologies such as the Internet of Things (IoT), video analysis, and infrared sensing provide an unprecedented opportunity to analyse pedestrian movements in much greater detail. Any data captured for analysis must also protect the privacy of pedestrians, avoiding identification via direct imaging or movement patterns. This project pushes the state of the art in pedestrian movement analysis by proposing the use of 3D multi-modal data from outdoor locations for quantitative wireframe representations of individuals as well as groups; the work involves crowd movement simulation, IoT data capture, privacy-preserving data analytics, the smart city paradigm, and health and wellness.


Code: 2020TUD04 (position filled)
Title: Modeling cognitive load with EEG and deep learning for human computer interaction and instructional design
Supervision Team: Luca Longo, TU Dublin (Primary Supervisor) / Ann Devitt, TCD (External Secondary Supervisor)
Description: This project will focus on multi-disciplinary research in the area of Cognitive Load (CL) modeling. It aims to construct an interpretable/explainable model of CL for real-time prediction of task performance. This will allow human-centred designers in HCI and Education to develop, personalize and rapidly test their interfaces, instructional material and procedures in a way that is aligned with the limitations of human mental capacity, maximising human performance. The novelty lies in the use of Deep Learning methods to automatically learn complex non-linear representations from EEG, moving beyond the knowledge-driven approaches that have produced hand-crafted deductive knowledge. A challenging task is to translate these representations into human-interpretable forms, a well-known issue in Explainable Artificial Intelligence. To tackle this, modern methods for automatic rule extraction from deep learning models will be employed, together with symbolic, argumentative reasoning methods, to bring these rules together in a highly accessible, explainable/interpretable model of CL.


Code: 2020TUD05 (position filled)
Title: Applying Genetic Evolution Techniques to the training of Deep Neural Networks
Supervision Team: John D. Kelleher, TU Dublin (Primary Supervisor) / Peter Corcoran, NUIG (External Secondary Supervisor)
Description: Deep learning has improved the state of the art across a range of digital content processing tasks. However, the standard algorithm for training neural networks, backpropagation, can encounter different types of challenges depending on the network architecture it is used to train. This project will focus on developing novel training algorithms for deep neural networks that can be shown to improve on backpropagation in terms of either final model accuracy or computational considerations (such as training time and/or data usage). Furthermore, these training algorithms will be tested across a range of different use cases (e.g., image processing, natural language processing) and network architectures so as to validate the general usefulness of the approach. The initial approach taken to develop these novel training algorithms will be to explore the use of genetic search algorithms. This project is part-funded by Nokia Bell Labs.
Note: The deadline for applications for this project is 16.00 on Friday 17th April 2020.
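To make the idea concrete, a genetic search can optimise a network's weights directly, with no gradients at all. The sketch below is a minimal, assumed setup (a tiny 2-4-1 network on the XOR toy task, truncation selection, Gaussian mutation); the population size, mutation scale and task are illustrative choices, not the project's method.

```python
# Minimal sketch: evolving the weights of a tiny neural network with a
# genetic algorithm instead of backpropagation.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_W = 2 * 4 + 4 + 4 * 1 + 1  # 2-4-1 network: 17 weights and biases

def forward(w, x):
    """Run the 2-4-1 tanh/sigmoid network whose parameters are the flat vector w."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-((h @ W2).ravel() + b2)))

def fitness(w):
    """Negative mean squared error on the toy task (higher is better)."""
    return -np.mean((forward(w, X) - y) ** 2)

# Simple generational GA: keep the 10 fittest, refill with mutated copies.
pop = rng.normal(0, 1, size=(50, N_W))
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]
    children = elite[rng.integers(0, 10, 40)] + rng.normal(0, 0.1, (40, N_W))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(w) for w in pop])]
preds = (forward(best, X) > 0.5).astype(int)
print(f"best fitness (-MSE): {fitness(best):.4f}, predictions: {preds}")
```

Because selection only needs a fitness score, the same loop applies unchanged to objectives where backpropagation struggles, e.g. non-differentiable losses or discrete architectural choices.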



University College Dublin

Code: 2020UCD01 (position filled)
Title: Accommodating Accents: Investigating accent models for spoken language interaction
Supervision Team: Julie Berndsen, UCD (Primary Supervisor) / Robert Ross, TU Dublin (External Secondary Supervisor)
Description: The recognition and identification of non-native accents is fundamental to successful human-human speech communication and to communication between humans and machines. Much of current speech recognition uses deep learning, but a recent focus on interpretability allows for a deeper investigation of the common properties of spoken languages and language varieties which underpin different accents. This PhD project proposes to focus on what can be learned about non-canonical accents and to appropriately adjust the speech to accommodate the (machine or human) interlocutors by incorporating results of existing perceptual studies. The project aims to advance the state of the art in spoken language conversational user interaction by exploiting the speaker variation space to accommodate non-native or dialect speakers. This will involve research into which salient phonetic properties identified during speech recognition relate to non-canonical accents, and how the speech or subsequent dialogue can be adjusted to ensure successful communication.


Code: 2020UCD02 (position filled)
Title: Machine Learning for Financial Asset Pricing
Supervision Team: Thomas Conlan, UCD (Primary Supervisor) / John Kelleher, TU Dublin (External Secondary Supervisor)
Description: Asset pricing is concerned with understanding the drivers of asset prices, helping investors to better understand the risks underpinning asset allocation. This research will employ machine learning (ML) techniques to uncover new links between economic fundamentals and asset prices, allowing the identification of mis-priced securities. ML-based techniques, such as dimensionality reduction, deep learning, regression trees and cluster analysis, have helped uncover complex non-linear associations across multiple fields but remain relatively unexplored in financial asset pricing. In this research, improved asset pricing precision will result from discerning between long-run fundamentals and short-run fluctuations. Economic intuition will be developed through the use of interpretable ML. The research has direct FinTech applications, including asset management, trading strategies and risk management.
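As a small illustration of one technique named above, dimensionality reduction (here, PCA via the SVD) can separate a common driver of a panel of asset returns from idiosyncratic noise. The data and one-factor structure below are synthetic assumptions for demonstration only.

```python
# Illustrative sketch: PCA on a synthetic panel of asset returns.
import numpy as np

rng = np.random.default_rng(1)
T, N = 1000, 20                       # 1000 days, 20 assets

market = rng.normal(0, 0.01, T)       # one latent common factor
betas = rng.uniform(0.5, 1.5, N)      # each asset's exposure to it
noise = rng.normal(0, 0.005, (T, N))  # idiosyncratic component
returns = np.outer(market, betas) + noise

# PCA: centre the panel, then take the singular value decomposition.
R = returns - returns.mean(axis=0)
U, s, Vt = np.linalg.svd(R, full_matrices=False)
explained = s**2 / np.sum(s**2)       # variance share per component

print(f"variance explained by first component: {explained[0]:.2%}")
```

The first principal component absorbs most of the co-movement, so the residuals left after removing it are natural candidates for asset-specific (mis)pricing signals.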


Code: 2020UCD03 (position filled)
Title: Blended Intelligence and Human Agency
Supervision Team: David Coyle, UCD (Primary Supervisor) / Gavin Doherty, TCD (External Secondary Supervisor)
Description: In cognitive science the sense of agency is defined as the experience of controlling one's actions and, through this control, affecting the external world. It is a crosscutting experience, linking to concepts such as free will and causality and having a significant impact on how we perceive the world. This project will investigate people's sense of agency when interacting with intelligent systems (e.g. voice agents). Whereas past research has explored situations where actions can be clearly identified as voluntary or involuntary, intelligent environments blur this distinction. For example, intelligent systems often interpret our intentions and act on our behalf. How this blending of intention/action impacts the sense of agency is poorly understood. The project is suitable for a candidate interested in Human-Computer Interaction. They will develop speech and VR systems that require human-computer cooperation and conduct studies to assess the impact of blended intelligence on people's experience of agency. The research has direct implications for technologies ranging from driverless cars to intelligent personal assistants in phones.


Code: 2020UCD04 (position filled)
Title: Designing speech agents to support mental health
Supervision Team: Benjamin Cowan, UCD (Primary Supervisor) / Gavin Doherty, TCD (External Secondary Supervisor)
Description: Voice User Interfaces (VUIs) could be highly effective for delivering mental health interventions, allowing users to disclose sensitive information and engage without judgment. This PhD will fuse knowledge from HCI, psycholinguistics, sociolinguistics, speech technology and mental health research to identify how to most effectively design VUIs for mental health applications. In particular, the PhD will focus on how 1) speech synthesis, 2) the linguistic content of utterances, and 3) the type of dialogue used by such agents impact engagement with mental health interventions. This high-impact research topic will break new ground by demonstrating the effect of speech interface design choices on user experience and user engagement in this context. The successful candidate will be based at University College Dublin (UCD) and will be part of the HCI@UCD group.


Code: 2020UCD05 (position filled)
Title: Risk Measurement at and for Different Frequencies
Supervision Team: John Cotter, UCD (Primary Supervisor) / John Kelleher, TU Dublin (External Secondary Supervisor)
Description: There are many risk events associated with trading that have affected markets, traders and institutions. These can occur very quickly or evolve more slowly over longer horizons. A common feature of these events is a lack of anticipation of the magnitudes of losses and a lack of controls in place to provide protection. A further common feature is that these can be large-scale events that are very costly and often systemic in nature. This project will apply alternative risk measures in setting margin requirements for futures trading, capital requirements for trading, and price limits and circuit breakers, to protect against extreme price/volume movements. The project will employ AI/ML techniques, along with other econometric principles, in risk measurement and management. This project will look to identify strengths and weaknesses in applying AI/ML approaches to modelling financial risk, and especially systemic risk.
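Two standard tail-risk measures of the kind this project would build on are historical Value-at-Risk (VaR) and Expected Shortfall (ES), which can be computed at any frequency of interest. The sketch below uses a synthetic heavy-tailed return series, a 99% confidence level and a 5-day aggregation; all of these are illustrative assumptions.

```python
# Illustrative sketch: historical VaR and ES at two frequencies.
import numpy as np

rng = np.random.default_rng(42)
daily = rng.standard_t(df=4, size=5000) * 0.01  # fat-tailed daily returns

def var_es(returns, level=0.99):
    """Historical VaR (loss quantile) and ES (mean loss beyond VaR)."""
    losses = -returns
    var = np.quantile(losses, level)
    es = losses[losses >= var].mean()
    return var, es

# Re-measure the same risk at a lower frequency: 5-day aggregated returns.
weekly = daily.reshape(-1, 5).sum(axis=1)

var_d, es_d = var_es(daily)
var_w, es_w = var_es(weekly)
print(f"daily  99% VaR {var_d:.4f}, ES {es_d:.4f}")
print(f"weekly 99% VaR {var_w:.4f}, ES {es_w:.4f}")
```

ES always sits at or beyond VaR, since it averages the losses in the tail past the VaR threshold; comparing the two frequencies shows how risk estimates scale with the measurement horizon, which is central to setting margins and capital for different holding periods.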