Maryam Basereh

Title: Automatic Transparency Evaluation for Open Knowledge Extraction Systems

Supervision Team: Rob Brennan, UCD / Gareth Jones, DCU

Description: Open Knowledge Extraction (OKE) is the automatic extraction of structured knowledge from unstructured or semi-structured text and the representation and publication of that knowledge as Linked Data (Nuzzolese et al. 2015). Because OKE systems scale well for searching and extracting knowledge, they are increasingly used as a fundamental component of advanced knowledge services. However, like many other modern AI-based systems, most OKE systems rely on non-transparent algorithms. This means that their processes and outputs are neither understandable nor explainable, their accountability cannot be guaranteed, and, in the case of an adverse outcome, no explanation can be provided. Transparency is one of the main components of AI governance and is necessary for accountability (Diakopoulos 2016, Reddy et al. 2020, Lepri et al. 2018, Winfield et al. 2019). The GDPR also requires transparency by affirming the “right to explanation” and restricting automated decision-making (Goodman and Flaxman 2017). To enhance the transparency of OKE systems, it must first be possible to evaluate their transparency automatically. Given the importance of transparency and the lack of research in this area, this project focuses on automatic transparency evaluation for OKE systems.
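
To illustrate the target output of an OKE system (not any particular system or this project's method), the following minimal Python sketch represents a few hand-written facts, of the kind an OKE pipeline would extract automatically, as RDF triples with rdflib and serializes them as Linked Data. The namespace, URIs, and property names are hypothetical.

```python
# Illustrative sketch only: in a real OKE pipeline the triples below would be
# produced automatically (entity recognition, relation extraction, entity
# linking); here they are written by hand to show the Linked Data output format.
from rdflib import Graph, URIRef, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/oke/")  # hypothetical namespace

text = "Dublin City University is a university located in Dublin."

g = Graph()
g.bind("ex", EX)

dcu = URIRef(EX["Dublin_City_University"])
dublin = URIRef(EX["Dublin"])

# Facts "extracted" from the sentence, represented as RDF triples.
g.add((dcu, RDF.type, EX["University"]))
g.add((dcu, RDFS.label, Literal("Dublin City University", lang="en")))
g.add((dcu, EX["locatedIn"], dublin))
g.add((dublin, RDFS.label, Literal("Dublin", lang="en")))

# Publish the knowledge in Turtle, a common Linked Data serialization.
print(g.serialize(format="turtle"))
```

A transparency evaluation for such a system would ask, for example, whether the provenance of each extracted triple and the reasoning behind each extraction step can be inspected and explained.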