d-real is funded by Science Foundation Ireland and by the contributions of industry partners.
Student Name: Bharat Agarwal
Title: Energy-Oriented and Quality-Aware Network Path Selection to Support Differentiated Adaptive Multimedia Delivery in Heterogeneous Mobile Network Environments
Supervision Team: Gabriel-Miro Muntean, DCU / Marco Ruffini, TCD
Description: The various wireless networks deployed by businesses, public institutions and others together form a heterogeneous wireless network environment capable of providing ubiquitous connectivity. Since mobile device owners are often on the move and use their devices anywhere and anytime, energy efficiency is of paramount importance when delivering data to mobile devices in general, and rich media content in particular. Moreover, users prefer high-quality content, which requires more energy to transmit and process. Consequently, there is a need for a trade-off between quality and energy levels. This project will develop an energy-oriented mechanism to select suitable delivery paths for multimedia streaming services in heterogeneous wireless network environments, in order to both save energy and maintain high levels of service quality.
Student Name: Kunchala Anil
Title: Privacy-preserving Pedestrian Movement Analysis in Complex Public Spaces
Supervision Team: Bianca Schoen-Phelan, TU Dublin / Mélanie Bouroche, TCD
Description: Smart cities should encourage walking as it is one of the most sustainable and healthiest modes of transport. However, designing public spaces that are inviting to pedestrians is an unresolved challenge due to the wide range of possible routes and the complex dynamics of crowds. New technologies such as the Internet of Things (IoT), video analysis, and infrared sensing provide an unprecedented opportunity to analyse pedestrian movements in much greater detail. Any data captured for analysis must also protect the privacy of pedestrians to avoid identification via direct imaging or movement patterns. This project pushes the state of the art in pedestrian movement analysis by proposing the use of 3D multi-modal data from outdoor locations for quantitative wireframe representations of individuals as well as groups. The work involves crowd movement simulation, IoT data capture, privacy-preserving data analytics, the smart city paradigm, and health and wellness.
Student Name: Seth Grace Banaga
Title: Embody Me: Achieving Proxy Agency through Embodiment in Mixed Reality
Supervision Team: Carol O’Sullivan, TCD / Gearóid Ó Laighin, NUIG
Description: An important factor in achieving believable and natural interactions in Virtual and Mixed Reality systems is the sense of personal agency, i.e., when a user feels that they are both controlling their own body and affecting the external environment e.g., picking up a ball and throwing it. The most natural way to give a sense of agency and control to a user is to enable them to use their own natural body motions to effect change in the environment. However, in restricted spaces or if the user has a disability, this may not always be possible. In this PhD project, we will investigate the effects of different strategies for non-direct agency in the environment, from simple device inputs, through to controlling an embodied virtual agent. This will involve animating the motions of a virtual avatar, and controlling the motions of this avatar using a variety of different methods (e.g., game controller, gesture, voice).
Student Name: Dipto Barman
Title: Personalised Support for Reconciling Health and Wellbeing Information of Varying Complexity and Veracity Towards Positive Behavioural Change
Supervision Team: Owen Conlan, TCD / Jane Suiter, DCU
Description: This project will introduce a new approach to visual scrutability that can facilitate users in examining complex and sometimes conflicting information, specifically in the context of personal healthcare, towards changing behaviour to improve their health. The research will examine how scrutability, an approach to facilitating the inspection and alteration of the user models that underpin a system’s interaction with a user, may provide a means of empowering users to interact with such difficult-to-reconcile information in order to promote behaviour change. Visual metaphors can empower the user to scrutinise and reflect on their developing understanding, knowledge acquisition and the validity of different sources of information. Defining a new approach that will enable users to reflect upon and control what information they consume in a personalised manner is proposed as a key element in fostering enhanced understanding of their own healthcare and wellbeing. This research will build on results from the H2020 Provenance and ProACT projects.
Student Name: Shubhajit Basak
Title: Advanced Facial Models for Rendering of Virtual Humans
Supervision Team: Michael Schukat, NUIG / Rachel McDonnell, TCD
Description: This PhD project will build on the current state of the art by applying advanced facial generation techniques from deep learning, such as StyleGAN, to build photo-realistic ‘random’ 3D facial models that can subsequently be rendered to multiple 2D poses. A particular focus of this research will be the photorealistic rendering of facial textures at NIR and LWIR frequencies, so that these models can generate multi-wavelength representations. In addition, this work will seek to go beyond existing facial datasets by rendering additional ground-truth information from the 3D animations.
Student Name: Maryam Basereh
Title: Socio-Technical Governance Framework for Clinical Risk Data Analytics
Supervision Team: Rob Brennan, DCU / Siobhán Corrigan, TCD
Description: Unexplainable data analytics like deep learning are being rapidly promoted for deployment into healthcare risk management systems where there is already a chronic lack of effective governance for operational risks. For example, avoidable harm to patients costs up to 16% of total hospital expenditure according to the OECD and cost €194m in Ireland in 2009 (Rafter et al. 2016). The Scally Report on Cervical Check in Ireland is severely critical of poor governance of risk. Since any credible risk management system must include human factors as well as technical risks, in this project we will examine the impact of the widespread availability of medical data analytics on risk from a socio-technical perspective and design appropriate risk data analytics governance methods and tools that can cope with this disruptive technology.
Student Name: Dan Bigioi
Title: Multi-Lingual Lip-Synch – Text to Speech & Audio Representation
Supervision Team: Peter Corcoran, NUIG / Rachel McDonnell, TCD / Naomi Harte, TCD
Description: This project will apply some of the latest deep learning techniques to build specialised datasets and train advanced AI neural network models to deliver real-time multi-lingual lip-synching for speakers in a video. The project will focus on the conversion of text subtitles into an intermediate speech representation suitable across multiple languages (e.g. phonemes). The preparation and automated annotation of specialised datasets provide an opportunity for high-impact research contributions from this project. The researcher on this project will collaborate with a second PhD student who will focus on photo-realistic lip-synching of the speech data. Both PhD students will have a unique opportunity to collaborate with engineers from Xperi, the supporting industry partner. The end goal is a practical production pipeline, inspired by ObamaNet, for multi-lingual over-dubbing of video content from multi-lingual subtitles.
Student Name: Bojana Bjegojević
Title: Fatigue monitoring and prediction in Rail applications (FRAIL)
Supervision Team: Maria Chiara Leva, TU Dublin / Sam Cromie, TCD / Dr Nora Balfe, TCD / Dr Luca Longo, TU Dublin
Description: The effect of fatigue on human performance has been observed to be an important factor in many industrial accidents. However, defining and measuring fatigue is not easily accomplished. The objective of this project is to test the most relevant mobile wearable and non-wearable unobtrusive technologies for monitoring fatigue in three different working environments in Irish Rail (e.g. train driving, signalling and maintenance tasks), and to develop a protocol that combines biosensor and/or mobile performance data acquisition related to fatigue assessment (e.g. a mobile version of the Psychomotor Vigilance Task, unobtrusive eye-tracking devices, wearable HRV measurement devices, physical activity monitoring, mobile EEG) with self-reported journal/self-assessment data, to help operators and organisations monitor the levels of fatigue experienced. The project will ultimately deliver a proposal for a user-friendly fatigue-monitoring tool to assist with continuous improvement in the area of Fatigue Risk Management.
Student Name: Anna Bleakley
Title: Designing to support common ground building in multiparty VR dialogues
Supervision Team: Benjamin Cowan, UCD / Vincent Wade, TCD
Description: Interaction in Virtual Reality often revolves around inputs such as gesture, gaze and control using physical hardware. VR is also growing as a potential way to engage in multiparty communicative tasks (such as multiparty meetings) with other users, with the possibility of engaging with a combination of virtual and avatar-based agents. Yet there is little understanding to date of how the design of VR environments and avatars can be optimised for effective conversational interactions. The proposed PhD will push the boundaries of research in VR by exploring how the design of VR-based experiences causally influences the development and maintenance of common ground and perceptions of shared understanding.
Student Name: Rob Bowman
Title: Designing Conversational User Interfaces for More Effective Mood Logging
Supervision Team: Gavin Doherty, TCD/ Benjamin Cowan, UCD / Anja Thieme, Microsoft Research Cambridge
Description: Self-monitoring activities, such as mood logging, are a central part of many treatments for mental health problems. They serve to help raise awareness of the person’s own feelings, daily activities and cognitive processes, and provide information that can inform future treatment. One interesting possibility for supporting such self-disclosure is through conversational user interfaces, allowing users to disclose sensitive information without judgment, facilitating more honest reflection and emotional reaction. Currently there is little understanding about 1) the opportunities and challenges that potential users see in using conversational interfaces for mood logging; 2) appropriate design parameters (e.g. appropriate dialogue structure, linguistic strategies) and their effect on user engagement; and, critically, 3) how effective this interaction would be in producing honest and frequent reporting in a real world deployment. This PhD will aim to target these three challenge areas.
This PhD is also supported by Microsoft Research through its PhD Scholarship Programme.
Student Name: Vicent Briva-Iglesias
Title: Understanding the Experience of Interactive Machine Translation
Supervision Team: Sharon O’Brien, DCU / Benjamin Cowan, UCD
Description: An increasing amount of information is being translated using Machine Translation. Despite major improvements in quality, intervention is still required by a human if quality is to be trusted. This intervention takes the form of “post-editing”, i.e. identifying and quickly fixing MT errors. This tends to be done in a serial manner, that is, the source text is sent to the MT engine, an MT suggestion appears, and the editor assesses it and fixes it, if necessary (termed “traditional” post-editing). Recently, a new modality has been developed called Interactive Machine Translation (IMT), involving real-time adaptation of the MT suggestion as the editor makes choices and implements edits. Very little is understood about this new form of human-machine interaction. This PhD will fill this knowledge gap by researching the cognitive effort involved in this type of translation production; how perceptions of system competence and trust influence decision-making; and how these evolve over time and with experience.
Student Name: Sarah Carter
Title: Privacy and ethical value elicitation in smartphone-based data collection
Supervision Team: Mathieu D’Aquin, NUIG / Heike Schmidt-Felzmann, NUIG / Kathryn Cormican, NUIG / Dave Lewis, TCD
Description: Data collection is increasingly being carried out through personal mobile devices. A key challenge is dealing with privacy and ethics in a way that is meaningful to the users. GDPR compliance is an obvious aspect, with much of it already handled by existing frameworks (e.g. consent gathering, secure data handling). There is, however, an increasing concern in the area of data ethics that the technical handling of GDPR is not sufficient to address. In this project, we will investigate ways to introduce a continuous alignment between participants’ privacy preferences and ethical values in smartphone-based data collection.
Student Name: Orla Cooney
Title: Designing speech agents to support mental health
Supervision Team: Benjamin Cowan, UCD / Gavin Doherty, TCD
Description: Voice User Interfaces (VUIs) could be highly effective for delivering mental health interventions, allowing users to disclose sensitive information and engage without judgment. This PhD will fuse knowledge from HCI, psycholinguistics, sociolinguistics, speech technology and mental health research to identify how to most effectively design VUIs for mental health applications. In particular, the PhD will focus on how 1) speech synthesis; 2) the linguistic content of utterances; and 3) the type of dialogue used by such agents impact engagement in mental health interventions. This high-impact research topic will break new ground by demonstrating the impact of speech interface design choices on user experience and user engagement in this context. The successful candidate will be based at University College Dublin (UCD) and will be part of the HCI@UCD group.
Student Name: Eduardo Cueto Mendoza
Title: Model Compression for Deep Neural Networks
Supervision Team: John Kelleher, TUD / Rozenn Dahyot, Maynooth University
Description: Deep learning has revolutionized digital media analysis, be it video, image, text, or indeed multimedia. The deep learning revolution is driven by three long-term trends: Big Data, more powerful computers (GPUs/TPUs), and ever larger models. At the same time, there has been an increase in edge computing and the deployment of deep neural networks to devices that have limited resources (such as memory, energy or bandwidth). This project will explore the development of novel cost functions, tailored to deep learning models for video and image analysis; compression techniques for deep neural networks for video and image analysis; and error analysis for model compression techniques.
Student Name: Johanna Didion
Title: Blended Intelligence and Human Agency
Supervision Team: David Coyle, UCD / Gavin Doherty, TCD
Description: In cognitive science, the sense of agency is defined as the experience of controlling one’s actions and, through this control, effecting change in the external world. It is a crosscutting experience, linking to concepts such as free will and causality, and having a significant impact on how we perceive the world. This project will investigate people’s sense of agency when interacting with intelligent systems (e.g. voice agents). Whereas past research has explored situations where actions can be clearly identified as voluntary or involuntary, intelligent environments blur this distinction. For example, intelligent systems often interpret our intentions and act on our behalf. How this blending of intention and action impacts the sense of agency is poorly understood. The project is suitable for a candidate interested in Human Computer Interaction. They will develop speech and VR systems that require human-computer cooperation and conduct studies to assess the impact of blended intelligence on people’s experience of agency. The research has direct implications for technologies ranging from driverless cars to intelligent personal assistants in phones.
Student Name: Johanna Dobbriner
Title: Hierarchical Policy Estimation for Multi-Modal Content Delivery in Virtual Second Language Acquisition Tutorials
Supervision Team: Robert Ross, TU Dublin / Julie Berndsen, UCD
Description: A key challenge in Second Language Acquisition is to give language learners meaningful opportunities to practice and learn while making mistakes in an accessible environment. Virtual characters have the potential to provide tuition through one-to-one engagement with the learner around experiences in the shared virtual environment. The computational modelling work of this PhD will focus on the application of state-of-the-art hierarchical reinforcement learning methods to learn flexible models that control what the tutor says and how the tutor says it. This modelling will be backed by empirical work to acquire language training data from virtual and physical environments.
Student Name: Cormac (Patrick Cormac) English
Title: Accommodating Accents: Investigating accent models for spoken language interaction
Supervision Team: Julie Berndsen, UCD / John Kelleher, TU Dublin
Description: The recognition and identification of non-native accents is fundamental to successful human-human speech communication and to communication between humans and machines. Much of current speech recognition now uses deep learning, but a recent focus on interpretability allows for a deeper investigation of the common properties of spoken languages and language varieties which underpin different accents. This PhD project proposes to focus on what can be learned about non-canonical accents, and to appropriately adjust the speech to accommodate the (machine or human) interlocutors by incorporating the results of existing perceptual studies. The project aims to advance the state of the art in spoken language conversational user interaction by exploiting the speaker variation space to accommodate non-native or dialect speakers. This will involve research into the salient phonetic properties identified during speech recognition that relate to non-canonical accents, and into how the speech or further dialogue can be adjusted to ensure successful communication.
Student Name: Megan Fahy
Title: Fostering digital social capital through design for wellbeing
Supervision Team: Marguerite Barry, UCD / Gavin Doherty, TCD / Jane Walsh, NUIG
Description: Studies in eHealth have shown that communication technologies can support intervention and treatment through the exchange of ‘social capital’ (SC), a concept from social science with a range of individual and societal benefits. Although a strong association between SC and wellbeing has been identified, there is a lack of empirical data and consistent methods to foster social capital through design. This project begins by systematically exploring the position of social capital within HCI to identify core design challenges for eHealth. Then, using participatory design methods, it will prototype technologies and develop a novel design framework based on platform independent interactive features of digital applications.
Student Name: Aisling Flynn
Title: A VR social connecting space for improved quality of life of persons with dementia
Supervision Team: Dympna Casey, NUIG / Marguerite Barry, UCD / Attracta Brennan, NUIG / Sam Redfern, NUIG
Description: Reminiscence and music are two key strategies used to promote social connectedness and reduce loneliness. Listening to and sharing music and recalling events from the past reconnect the person with dementia to the present, enabling them to converse, interact and socialise. This research will create a set of meaningful multi-user VR spaces for people with dementia focused on providing opportunities to reminisce and to engage in music-based activities. VR design skills will be combined with user-centred design and public and patient involvement (PPI) to deliver an effective set of meaningful VR experiences that can be widely deployed to benefit persons living with dementia.
Student Name: Frank Fowley
Title: Towards a Translation Avatar for ISL – TrAvIS
Supervision Team: Anthony Ventresque, UCD / Carol O’Sullivan, TCD
Description: Members of the Deaf community face huge barriers in accessing essential services including health, education, and entertainment. Many people in the Deaf community have a low level of English literacy, so although the Internet and other technology have improved access for some, the majority of people in Ireland who use Irish Sign Language (ISL) as their first and/or preferred language struggle to access vital services. The aim of this PhD project is to propose a tool that translates spoken English to ISL and vice versa in real time, using a virtual avatar, to improve accessibility and raise awareness of Sign Language and the issues faced by the Deaf community. This will involve research in Computer Vision, Machine Learning, Computational Linguistics, Audio Processing and Virtual Character Animation.
Student Name: Yasmine Guendouz
Title: Using Machine Learning to identify the critical features of carotid artery plaque vulnerability from Ultrasound images
Supervision Team: Caitríona Lally, TCD / Catherine Mooney, UCD
Description: Over one million people in Europe have severe carotid artery stenosis, which may rupture causing stroke, the leading cause of disability and the third leading cause of death in the Western world. This project aims to develop a validated means of assessing the predisposition of a specific plaque to rupture using Ultrasound (US) imaging. Using machine learning (ML) techniques, we will look at multiple US modalities concomitantly, including B-mode images, elastography and US imaging with novel contrast agents for identifying neovascularisation. Combining this with in silico modelling of the vessels will provide us with a unique capability to verify the clinical and US findings by examining the loading on the plaques and therefore the potential risk of plaque rupture. Proof of the diagnostic capabilities of ML and non-invasive, non-ionising US imaging in vivo for the diagnosis of vulnerable carotid artery disease would be a ground-breaking advancement in early vascular disease diagnosis.
Student Name: Rolando Hanlon
Title: Enhancing Clinicians ability to pull together trusted data at the right time
Funding: This PhD is sponsored by Sonas Innovation (www.sonasi.com) and supported by the Irish Health Service Executive (HSE) Digital Academy
Supervision Team: Declan O’Sullivan, TCD / Marguerite Barry, UCD
Description: Clinicians and clinician scientists increasingly face the challenge of integrating diverse data sources related to patients. These include sensitive personal clinical data, patient-generated data (e.g. via an app), third-party curated data (e.g. registry, biomarker data) and third-party services data (e.g. geolocation, environment data). Coupled with this, the quality of the data being integrated needs to be verified, and the source needs to be verified as trustworthy on an ongoing basis. This PhD will explore how to engage the clinician/scientist in the data integration process, given the constant evolution of the data and data sources. It will advance the state of the art in the areas of data integration, personalisation and user engagement. This PhD will uniquely leverage the expertise of staff from the sponsor, Sonas Innovation, and from the Irish Health Service Executive (HSE) Digital Academy.
Student Name: David Healy
Title: Using digitally-enhanced reality to reduce sedentary behaviour at home in an elderly population
Supervision Team: Jane Walsh, NUIG / Owen Conlan, TCD
Description: It is well known that regular physical activity (PA) limits the development and progression of chronic diseases and disabling conditions. However, time spent in sedentary behaviour (SB) has increased substantially over the last three decades and increases with age. The project will explore health behaviour change from a behavioural science perspective using the ‘person-based approach’ and will develop appropriate personalised behaviour change techniques (BCTs) integrated into VR systems to effectively reduce sedentary behaviour in older adults at home.
Student Name: Darragh Higgins
Title: VRFaces: Next-generation performance capture for digital humans in immersive VR
Supervision Team: Rachel McDonnell, TCD / Benjamin Cowan, UCD
Description: It is expected that immersive conferencing in virtual reality will replace audio and video conferencing in the future. By using embodied avatars in VR as digital representations of meeting participants, this could revolutionize the way business is conducted, allowing for much richer experiences that incorporate the interpersonal communication that occurs through body language and social cues. However, creating virtual replicas of human beings is still one of the greatest challenges in the field of Computer Graphics. The aim of this project is to advance animation technology considerably, to allow a user to truly “become” their virtual character, feeling ownership of their virtual face, with near cinema-quality facial animation.
Student Name: Zhannur Issayev
Title: Risk Measurement at and for Different Frequencies
Supervision Team: John Cotter, UCD / Tom Conlon, UCD
Description: There are many risk events associated with trading that have affected markets, traders and institutions. These can occur very quickly or evolve more slowly over longer horizons. A common feature of these events is a lack of anticipation of the magnitudes of losses and a lack of controls in place to provide protection. A further common feature is that these can be large-scale events that are very costly and often systemic in nature. This project will apply alternative risk measures in setting margin requirements for futures trading, capital requirements for trading, and price limits and circuit breakers, to protect against extreme price/volume movements. The project will employ AI/ML techniques, along with other econometric principles, in risk measurement and management. The project will seek to identify strengths and weaknesses in applying AI/ML approaches to modelling financial risk, and especially systemic risk.
Student Name: Bilal Alam Khan
Title: Driver sensing & inclusive adaptive automation for older drivers
Supervision Team: Sam Cromie, TCD / Chiara Leva, TU Dublin
Description: The proportion of over-65s in the population is growing; by 2030 a quarter of all drivers will be older than 65. At the same time, transport is being transformed with connected, automated and electric vehicles. This project will take a user-centred design approach to understanding the needs of older drivers, exploring how these could be addressed through driver sensing and adaptive automation. Progress beyond the state of the art will include a technology strategy for an inclusive personalised multimodal Human Machine Interface (HMI) for older drivers and an inclusive standard for driver sensing and/or HMI for connected and autonomous vehicles.
Student Name: Siobhán Lenihan
Title: Value-led design for personalized enhanced reality applications
Supervision Team: Heike Felzmann, NUIG / Marguerite Barry, UCD
Description: This project will investigate the ethical, legal and HCI aspects associated with the personalisation of enhanced reality and virtual reality applications, with the aim to identify relevant concerns and develop potential solutions for problematic uses of those technologies. The project will draw on use cases from NUIG groups in the field of eHealth and smart city applications from a Value-Sensitive Design perspective. It aims to identify relevant value-related concerns for specific applications and explore the potential generalizability to other application domains in the field of enhanced and virtual reality.
Student Name: Jiawen Lin
Title: Improving Open Domain Dialogue Systems and Evaluation
Supervision Team: Yvette Graham, TCD / Benjamin Cowan, UCD
Description: Do you wish Alexa was more reliable, entertaining and funny? Dialogue systems, such as Alexa, are currently incapable of communicating in a human-like way and this is one of the grandest challenges facing Artificial Intelligence. This project involves developing new approaches to dialogue systems that will allow the systems we interact with every day to become more personable and easier to communicate with. The focus will be on examining how existing dialogue systems work and where they need improvement. The project will also look at developing ways of giving systems more personality, making them better at responding to instructions and even making them more entertaining for users to interact with.
Student Name: Farzin Matin
Title: ASD Augmented: Influencing pedagogical perspectives and practices
Supervision Team: Eleni Mangina, UCD / Aljosa Smolic, TCD
Description: This project begins with the hypothesis that the emerging technology of Augmented Reality (AR) will influence the pedagogical perspectives and practices for students with ASD. Research studies indicate that students with autism choose majors in Science, Technology, Engineering and Maths (STEM) at higher rates than students in the general population. They are “looking for patterns, and in Science it is natural to look for patterns that reflect natural law”. The aim is to identify the impact of AR in concentration for students diagnosed with ASD.
Student Name: Shane McCully
Title: Psychological self-report with wearable technology
Supervision Team: Gavin Doherty, TCD / David Coyle, UCD
Description: There has been much interest recently in the design of technologies to support the delivery of mental healthcare, looking at many different aspects ranging from assessment and diagnosis, through intervention, to long-term self-monitoring. Asking people how they feel (self-report) is an important part of many mental health interventions. This ranges from short questions about how people are doing “in the moment” to formal psychological assessment. This PhD will investigate ways of improving engagement with psychological self-report using wearables such as smartwatches, and will look at how intelligent notification systems might improve the response rate to prompts.
Student Name: Jack Millist
Title: Applying Genetic Evolution Techniques to the training of Deep Neural Networks
Supervision Team: John D. Kelleher, TU Dublin / Peter Corcoran, NUIG
Description: Deep learning has improved the state of the art across a range of digital content processing tasks. However, the standard algorithm for training neural networks, backpropagation, can encounter different types of challenges depending on the network architecture it is used to train. This project will focus on developing novel training algorithms for deep neural networks that can be shown to improve on backpropagation in terms of either final model accuracy or computational considerations (such as training time and/or data usage). Furthermore, these training algorithms will be tested across a range of different use cases (e.g., image processing, natural language processing) and network architectures so as to validate the general usefulness of the approach. The initial approach to developing these novel training algorithms will be to explore the use of genetic search algorithms.
Student Name: Anwesha Mohanty
Title: An Intelligent Diagnostic System for Classifying Dermatological Conditions using 3D Computer Vision
Supervision Team: Hossein Javidnia, DCU / Alistair Sutherland, DCU / Rozenn Dahyot, Maynooth University
Description: Skin diseases are increasing globally. A fast, accurate and low-cost system for diagnosis would be very beneficial, especially in developing countries. The accurate detection of skin lesions, inflammation and the different subtypes of diseases such as rosacea and seborrheic dermatitis is vital for early treatment and medication. In this research, a three-stage approach will be carried out, focusing on 3D computer vision, image processing and machine learning. The aim of this project is to identify skin disorders for subtypes of rosacea and other skin conditions by establishing an image-based diagnosis system using 3D computer vision, machine learning and artificial intelligence. The system should be easily usable by both specialist clinicians and general practitioners (GPs).
Student Name: Théo Morales
Title: Hand-object manipulation tracking using computer vision
Supervision Team: Gerard Lacey, TCD / Alistair Sutherland, DCU
Description: Current hand tracking for VR/AR interfaces focuses on the manipulation of virtual objects such as buttons, sliders and knobs. Such tracking is most often based on tracking each hand independently, and when hands become partially occluded or are grasping a real object the hand tracking often fails. Tracking the hands during the manipulation of real-world objects opens up AR/VR to much richer forms of interaction and would provide the basis for activity recognition and the display of detailed contextual information related to the task at hand. This PhD project involves researching the tracking of unmodified hands with an ego-centric camera (2D and 3D) in the presence of partial occlusions. Technologies will include the use of deep learning models in combination with 3D models to determine hand pose in the presence of occlusion. Our approach will also exploit high-level knowledge about object affordances and common hand grasp configurations, as is commonly used in robotic grasping.
Student Name: Yasmin Moslem
Title: MT system selection and recycling/fixing recycling candidates in a hybrid set-up
Funding: This PhD is sponsored by Microsoft Ireland Research
Supervision Team: Andy Way, DCU / John Kelleher, TUD
Description: Domain-tuned MT systems outperform general domain MT models, when they are used to translate in-domain data. It may not always be known in advance of translation time which domain is best suited to a particular text or sentence, and even for a known domain like software, some strings may be better translated by a general domain system. This gives rise to a number of research questions, including: Given multiple domain-tuned NMT systems, and translation candidates, how do we analyze an incoming string and determine which system will do the best translation at runtime? How do we best assess which translation candidate is the best choice? What are the best approaches for NMT? Also, if we have access to recycling (in a Translation Memory), when is a recycling match better than an MT candidate? Can NMT help fix high quality TM matches? Can a better translation candidate be found by combining elements of multiple translations, from recycling and MT systems? Can post-editing data be leveraged, e.g. a form of automatic post-editing approach?
Student Name: Prashanth Nayak
Title: Targeted Improvements for Technical Domain Machine Translation
Funding: This PhD is sponsored by Microsoft Ireland Research
Supervision Team: Andy Way, DCU / John Kelleher, TU Dublin
Description: Neural MT (NMT) has delivered significant improvements in overall translation quality in recent years, but even the latest models struggle to translate brand names and important technical terms accurately. How can accurate translation be ensured for brand names and terms with known approved translations, even if the training data contains alternative translations? Can contextual clues be used to force the correct translation of ambiguous terms? This PhD will focus on exploring how improved term translation can be integrated within a general domain NMT model, to make targeted improvements to the overall translation quality. The main application area is MT for custom domains, such as information technology and software localisation.
Student Name: Chun Wei Ooi
Title: Enhancing Visual and Physical Interactions in Augmented Reality
Supervision Team: John Dingliana, TCD / Cathy Ennis, TU Dublin
Description: This project deals with advancing the state-of-the-art in the rendering and simulation of high fidelity animated virtual objects in augmented reality (AR) environments. In particular, we will develop novel techniques for improving the perceived realism of interactions between real-world objects and dynamic virtual elements in real-time. To address this problem, we will investigate the use of unified adaptive level-of-detail volumetric models that will serve as proxy geometry for both the real-world environment scanned by the AR system and the virtual objects generated and simulated by the animation system.
Student Name: Alfredo Ormazabal
Title: Incorporating patient-generated health data into clinical records
Supervision Team: Lucy Hederman, TCD / Damon Berry, TU Dublin
Description: Patient-generated health data (PGHD), that is data originating from patients or their carers rather than from clinicians, is a growing feature of chronic disease care. PGHD has the potential to impact health care delivery and clinical research. This PhD will focus on informatics aspects of these challenges, exploring how to allow for the incorporation of PGHD in EHRs and clinical systems, taking account of data interoperability issues, ensuring standardisation of the non-clinical data, and the appropriate representation of metadata about quality, governance and provenance. The research will be grounded in the Irish national health system and will seek to align with the national EHR project.
Student Name: Ayushi Pandey
Title: Human Speech – How do I know it’s Real?
Supervision Team: Naomi Harte, TCD / Julie Berndsen, UCD
Description: How can you tell when speech is real, or when it is fake? This is the focus of this PhD project and it goes to the very core of the nature of human speech. Directly relating what is observable at a signal level in speech to how natural that signal is, as perceived by a human, is an unsolved problem in speech technology. This PhD addresses this gap in knowledge. The research will leverage a wealth of data from recent Blizzard speech synthesis challenges, where the naturalness of multiple world-class speech synthesis systems has been rated and made publicly available for researchers. Simultaneously, the research will also leverage shared datasets on spoofing from the automatic speaker verification community, such as those available through http://www.asvspoof.org/. The research is truly novel in that it goes beyond treating speech purely as a signal, and will bring the work to the level of investigating naturalness in continuous speech, over many seconds and sentences of generated speech.
Student Name: Breanne Pitt
Title: Multi-Perspectivity in Next-Generation Digital Narrative Content
Supervision Team: Mads Haahr, TCD / Marguerite Barry, UCD
Description: Stories and storytelling are crucial to the human experience as well as to the creation of meaning individually and socially. However, today’s most pressing issues, such as climate change and the refugee crisis, feature multilateral perspectives with different stakeholders, belief systems and complex interrelations that challenge traditional ways of narrative representation. Existing conventions (e.g., in news and on social media) lack the expressive power to capture these complex stories and too easily become prone to oversimplified presentation of complex material – even fake news – resulting in polarization of populations. Taking its starting point in the System Process Product (SPP) model developed by Koenitz (2015), this research will develop a narrative architecture useful for structuring multi-perspective narrative content and evaluate it through the creation of multi-perspective narratives, at least one of which will be a VR/AR/MR experience.
Student Name: Anastasiia Potiagalova
Title: Conversational Search of Image and Video with Augmented Labeling
Supervision Team: Gareth Jones, DCU / Benjamin Cowan, UCD
Description: The growth of media archives (including text, speech, video and audio) has led to significant interest in the development of search methods for multimedia content. A significant and rapidly expanding new area of search technology research in recent years has been conversational search (CS). In CS users engage in a dialogue with an agent which supports their search activities, with the objective of enabling them to find useful content more easily, quickly and reliably. To date, CS research has focused on text archives; this project is the first to explore CS methods for multimedia archives. An important challenge within multimedia search is formation of queries to identify relevant content. This project will seek to address this challenge by exploring the use of technologies from augmented reality to dynamically label images and video displayed within the search process, to assist users in forming more effective queries using a dialogue-based search framework.
Student Name: Darren Ramsook
Title: Video Coding Artefact Suppression Using Perceptual Criteria
Supervision Team: Anil Kokaram, TCD / Noel O’Connor, DCU
Description: Video traffic accounts for about 70% of all internet traffic now, and predictions are on track for 80% by 2022. Data compression is the only reason that video has not broken the system. However, lossy video compression causes artefacts, e.g. blocking and contouring, which have to be removed by the video player receiving the compressed data. None of the current techniques for removing these artefacts exploit visual quality criteria relevant to humans. This causes a problem for video consumed on different devices. By exploiting the visibility of artefacts on different devices, this project develops new techniques for artefact reduction that are sensitive to the human visual system, hence enabling appropriate video quality/bitrate compromises to be made for different devices.
Student Name: David Redmond
Title: Prosthetics, Virtual and Augmented Reality, and Psychosocial Outcomes
Supervision Team: Pamela Gallagher, DCU / Brendan Rooney, UCD
Description: To date in the field of rehabilitation, the application of virtual and augmented reality has predominantly focused on improving physical and clinical outcomes. There is enormous scope to learn about whether and how VR or AR environments impact outcomes for people who use assistive technology (e.g. wheelchairs, prosthetics) and to explore their broader psychosocial impact. This research will capture the inter-relations between self-conceptions, identity, assistive technology integration and VR/AR environments to optimise well-being, proficiency and personally meaningful outcomes. This project will explore (1) whether and what type of VR or AR environment can be used to develop a greater integration of the person/technology to not only improve physical functioning but also psychological wellbeing; (2) what psychological outcomes are impacted (e.g. how does it shape a person’s sense of self and identity?); and (3) what type and quality of VR or AR environment would have the greatest impact on these outcomes?
Student Name: Gearóid Reilly
Title: A Multi-User VR Recreational Space for People with Dementia
Supervision Team: Sam Redfern, NUIG / Gabriel-Miro Muntean, DCU / Attracta Brennan, NUIG
Description: Dementia is one of the greatest societal and economic health challenges of the 21st century, and a number of research initiatives have proven the usefulness of VR as a therapy tool. Although removing social isolation and supporting re-connection with friends and family are central to improving outcomes for people with dementia, networked VR-based therapy technologies with an emphasis on social activity have not previously been studied. This project will create a multi-user VR space where socialization and social performance are supported. The VR space will be immersive, activity-based and facilitate multi-user interactions enabling the person to engage with a professional therapist, or their friends and family, without the logistical difficulties of physical travel. A number of interactive scenarios will be deployed and validated through user studies. Supervision is by a cross-disciplinary team of computer scientists and nurses.
Student Name: Leona Ryan
Title: Using digitally-enhanced reality to reduce obesity-related stigma
Supervision Team: Jane Walsh, NUIG / Owen Conlan, TCD
Description: Weight-related stigma is well established as a pervasive feature of societies and predicts higher risk of depression, anxiety, and suicidality, as well as greater risk of inflammation and chronic disease. Medical professionals consistently display high levels of anti-obesity bias, assume obesity suggests patient non-compliance, and admit they would prefer to avoid dealing with obese patients at all. A huge industry now exists around overcoming obesity and supporting weight management. However, much of the research suggests that reducing stigma will have a significantly greater impact on rates of obesity. The present study proposes to develop, deliver and evaluate an evidence-based VR intervention to foster empathy and reduce obesity-related stigma in target groups (e.g. medical students). This will be achieved by synergising current psychological research on empathy and stigma with state-of-the-art VR technologies. Intervention content will be developed using the ‘person-centred approach’ and outcomes assessed will include both psychological and behavioural indicators of success.
Student Name: Davoud Shariat Panah
Title: Heart health monitoring using machine learning
Supervision Team: Susan McKeever, TU Dublin / Andrew Hines, UCD
Description: The SoundGen project will deliver state-of-the-art techniques for advanced and effective sound generation and mixing. This work is inspired by recent developments in image neural style transfer networks for image mixing. Sounds can be represented as spectrogram images, which have proven effective as representations for sound when used with neural network classifiers. This project proposes to use spectrograms in combination with CNNs that have been trained on a variety of sounds, to discover how specific feature maps of the CNN are associated with aspects of sound, similar to that of image neural style transfer networks.
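A minimal sketch of the representation step described above: a magnitude spectrogram is computed from overlapping frames of a signal, and a Gram matrix of inner products between frequency channels gives the kind of second-order "style" summary used in image neural style transfer. A naive DFT is used here purely for illustration (real systems would use an FFT library and trained CNN feature maps); all parameter values are assumptions.

```python
import cmath
import math

def spectrogram(signal, frame_len=64, hop=32):
    """Magnitude spectrogram via a naive DFT (illustrative, not fast)."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        spectrum = [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                            for n in range(frame_len)))
                    for k in range(frame_len // 2)]
        frames.append(spectrum)
    return frames  # time x frequency "image"

def gram_matrix(channels):
    """Style-transfer-like representation: inner products between channels."""
    n = len(channels)
    return [[sum(a * b for a, b in zip(channels[i], channels[j]))
             for j in range(n)] for i in range(n)]

# Toy "sound": a 440 Hz sinusoid sampled at 8 kHz
signal = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(512)]
spec = spectrogram(signal)
channels = [list(col) for col in zip(*spec)]  # frequency bins as "channels"
gram = gram_matrix(channels)
```

In the project's setting the channels would be CNN feature maps computed from the spectrogram rather than raw frequency bins, but the Gram-matrix construction is the same.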
Student Name: Position vacant
Title: Cough monitoring through audio analysis
Supervision Team: David Dorran, TU Dublin / Ivana Dusparic, TCD
Description: Cough sounds are important indicators of an individual’s health and are often used by medical practitioners to diagnose respiratory and related ailments [1-2]. Automatic detection and classification of cough sounds through the analysis of audio recordings would provide a low-cost, non-invasive approach to health monitoring for individuals and communities. The principal aim of this research is to develop robust and reliable cough detection algorithms that are applied to audio recordings obtained from microphones in fixed locations around homes and public spaces. Once detected, the cough events will be further analysed to identify features and patterns that can be used to inform on the health of both an individual and the wider community.
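A first-stage sketch of the detection step, assuming a simple short-time-energy threshold (an actual system would follow this with richer spectral features and a trained classifier to separate coughs from other loud events); the frame sizes and threshold factor are illustrative assumptions.

```python
import math
import random

def frame_energies(samples, frame_len=256, hop=128):
    """Short-time energy per frame -- a basic feature for event detection."""
    return [sum(s * s for s in samples[i:i + frame_len]) / frame_len
            for i in range(0, len(samples) - frame_len + 1, hop)]

def detect_events(energies, factor=4.0):
    """Return start-frame indices of events whose energy exceeds
    factor x the median (background) energy level."""
    background = sorted(energies)[len(energies) // 2]
    threshold = factor * max(background, 1e-12)
    events, active = [], False
    for i, e in enumerate(energies):
        if e > threshold and not active:
            events.append(i)   # rising edge: new event starts here
            active = True
        elif e <= threshold:
            active = False
    return events

# Toy recording: low-level noise with one loud burst (a "cough") in the middle
rng = random.Random(1)
quiet = [rng.gauss(0, 0.01) for _ in range(4000)]
burst = [rng.gauss(0, 0.5) for _ in range(800)]
audio = quiet + burst + quiet
events = detect_events(frame_energies(audio))
```

The median-based background estimate makes the threshold robust to occasional loud frames, which matters for always-on microphones in uncontrolled environments.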
Student Name: Mayank Soni
Title: Adaptive Dialogue in Digital Assistants for Purposeful Conversations
Supervision Team: Vincent Wade, TCD / Benjamin Cowan, UCD
Description: Chatbots and intelligent assistants are becoming ever more ubiquitous as natural-language human-machine interfaces, supporting a range of tasks – from information requests to commercial transactions. Although more challenging, there is growing interest in systems which can also interact in a social fashion, building a relationship with a user over time through natural-seeming talk, while embedding practical tasks within this matrix of conversation. The project will investigate and implement techniques and technologies which will allow systems to seamlessly transition between topics (and the underlying domains), passing control of dialog between federated dialog managers, each trained on different domains.
Student Name: Edward Storey
Title: My voice matters – extending high performance speech interfaces to the widest possible audience
Supervision Team: Naomi Harte, TCD / John McCrae, NUI Galway
Description: The performance of speech interfaces continues to improve at pace, with users now able to engage with technology such as Google Duplex to automatically book a restaurant. A person’s ability to enter a world full of speech-interface driven technology depends directly on whether that technology works well for their own speech. Many users, such as those with speech impediments, the elderly, young children, and non-native speakers can become excluded. This PhD will explore ways to improve performance in speech interfaces for marginalised users. A fundamental understanding of how speech from these users is different gives us the best opportunity to guide deep-learning systems to solutions that serve a wider range of speakers. We need to discover what, and how, DNNs learn from speech, and leverage this to develop models with a greater ability to understand less-encountered speaking styles. This PhD will contribute fundamental ideas both in speech understanding, and in interpretable and adaptable AI. This PhD will be aligned with the sponsorship by Sonas Innovation (http://sonasi.com) of D-REAL PhDs, and will also benefit from research ongoing within the SFI ADAPT Research Centre and the Sigmedia Research Group at TCD.
Student Name: Arthit Suriyawongkul
Title: Modelling Purpose and Responsibility for Federated Governance of Data Sharing
Supervision Team: Dave Lewis, TCD / Rob Brennan, DCU / Aphra Kerr, Maynooth University
Description: Data sharing for AI training needs transparent governance and responsibilities. This research will develop semantic models for machine reasoning to help parties decide on data sharing agreements, e.g. for text, speech and video data to train medical chatbot agents. It will model data’s: personal information content; intended use; scope of processing and sharing; governance procedures; ownership rights; security protocols and quality assurance liabilities.
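One way such models could support machine reasoning over a sharing decision, sketched with an entirely hypothetical vocabulary (the project itself would use formal semantic models, e.g. ontologies, rather than Python dicts): a proposed use is checked against an agreement's permitted purposes and, where personal data is involved, its required safeguards.

```python
# Hypothetical machine-readable sharing agreement (illustrative vocabulary)
agreement = {
    "dataset": "clinic-dialogues-v1",
    "permitted_purposes": {"medical-chatbot-training", "quality-assurance"},
    "contains_personal_data": True,
    "required_safeguards": {"encryption-at-rest", "access-logging"},
}

def sharing_allowed(agreement, purpose, safeguards):
    """Decide whether a proposed use complies with the agreement:
    the purpose must be permitted, and for personal data every
    required safeguard must be in place."""
    if purpose not in agreement["permitted_purposes"]:
        return False, "purpose not permitted"
    missing = agreement["required_safeguards"] - set(safeguards)
    if agreement["contains_personal_data"] and missing:
        return False, "missing safeguards: " + ", ".join(sorted(missing))
    return True, "compliant"

ok, reason = sharing_allowed(
    agreement, "medical-chatbot-training",
    {"encryption-at-rest", "access-logging"})
```

Returning a reason alongside the decision matters for the transparency goal: parties can see *why* a request was refused, not just that it was.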
Student Name: Seán Thomas
Title: Creating plausible speech-driven conversational characters and gestures
Supervision Team: Cathy Ennis, TU Dublin / Rachel McDonnell, TCD
Description: Interaction with virtual characters has provided increased engagement and opportunities for immersion for players in a social context for many purposes. Online gaming and augmented and virtual reality applications provide a space for embodied interaction; players/users can be represented by a virtual avatar, and exchanges become more engaging. Plausible avatars play a key role in creating VR environments that allow the user to feel high levels of immersion. However, the requirements of real-time dynamic interactions pose a serious challenge for developers; real-time behaviour and animation for these characters is required in scenarios where it is unknown what types of behaviours might be needed. One way to enhance this is to ensure that the characters users are represented by, and engage with, behave plausibly. We aim to tackle part of this problem by investigating ways to generate plausible non-verbal social behaviours (such as conversational body motion and gestures) for these characters.
Student Name: Duyen Tran
Title: Next Generation Search Engines
Supervision Team: Cathal Gurrin, DCU / Owen Conlan, TCD
Description: The current approach to web search is based on a decades old model of information retrieval in which the user converts an information need into a textual query and browses a result list that is minimally personalized by ranking algorithms operating over sparse personal data. Additionally, the current models are designed as closed-loop systems with the search provider having control of the user model and monetising user profiles without any involvement of, or value for, the user. With large volumes and new forms of personal data being gathered nowadays, there is a massive opportunity in this project to look beyond the current approach to web search and develop a Next Generation Search Engine that puts the user profile at the centre of the ranking algorithms, and moreover, allows the user to control how their personal profile data is used by the search engine.
Student Name: Arjun Vinayak Chikkankod
Title: Modeling cognitive load with EEG and deep learning for human computer interaction and instructional design
Supervision Team: Luca Longo, TU Dublin / Ann Devitt, TCD
Description: This project will focus on multi-disciplinary research in the area of Cognitive Load (CL) modeling. It aims at constructing an interpretable/explainable model of CL for real-time prediction of task performance. It will allow human-centred designers in HCI and Education to develop, personalize and rapidly test their interfaces, instructional material and procedures so that they are aligned with the limitations of human mental capacity, maximising human performance. The novelty lies in the use of deep learning methods to automatically learn complex non-linear representations from EEG, moving beyond the knowledge-driven approaches that have produced hand-crafted deductive knowledge. A challenging task is to translate these representations into human-interpretable forms, a well-known issue in Explainable Artificial Intelligence. To tackle this, modern methods for automatic rule extraction from deep-learning models will be employed, together with symbolic, argumentative reasoning methods, to bring these rules together in a highly accessible, explainable/interpretable model of CL.
Student Name: Liang Xu
Title: Paiste Corcra: Finding the purple patches for CALL in LILAC (the Learner Language Corpus)
Supervision Team: Monica Ward, DCU / Elaine Uí Dhonnchadha, TCD
Description: Language learning is a complex task and involves a range of cognitive processes to be successful. In many countries, the teaching and learning of Less Commonly Taught Languages makes minimal use of digital technologies. There is a need to provide intelligent digital content to help teachers teach the subject and to help students learn it more efficiently and effectively. Learner corpora (electronic collections of language learner data) have been shown to be beneficial for language learning. Error-tagged learner corpora, where all the errors in a corpus have been annotated with standardised error tags, are particularly useful, but there is little data available for learners studying Less Commonly Taught Languages. This project will design and develop a Learner Language Corpus (LILAC) based on adaptive (machine) learning. The LILAC corpus will be analysed to extract detailed error statistics and to carry out analyses of specific error types.
Student Name: Position vacant
Title: Linked geospatial data supporting Digitally Enhanced Realities
Supervision Team: Avril Behan, TU Dublin / Declan O’Sullivan, TCD
Description: As with many other complex multi-property environments, navigation around healthcare campuses is a significant challenge for a variety of stakeholders both during everyday usage (by clients, visitors, healthcare professionals, facility managers, equipment and consumables suppliers, and external contractors) and during design for construction and redesign for renovation/retrofit. This project will progress the integration of the currently diverse and unconnected geospatial, BIM and other relevant data to deliver better return on investment for both operational and development budget holders, while also developing the research capabilities of graduates and the organisations with whom this project engages (for example: Ordnance Survey Ireland, HSE).