Building Transparency and Explainability in Predictive Process Analytics
Predictive process analytics, a newly emerging topic in the field of process mining, has sparked widespread interest in academia and industry. It typically builds on machine learning models that forecast the future states of ongoing process instances by learning from the process execution history recorded in event logs. While advanced learning techniques, such as deep neural networks, are applied to enhance prediction accuracy, they are widely regarded as “black boxes” due to their sophisticated internal computations. Consequently, predictive systems built on these techniques fail to address the need for transparency and explainability.
In this talk, I will start by introducing explainable machine learning and its main techniques, such as model-specific interpretability and post-hoc explanations, highlighting their strengths and limitations. Next, I will present three of our research projects on explainable predictive process analytics. First, we apply post-hoc explanation techniques to extract explanations from selected benchmark process prediction models and then use those explanations to inspect the models; the findings reveal issues with existing process prediction models. Second, we devise a novel approach that incorporates an intuitive explanation mechanism into a deep learning prediction model, building intrinsic explainability into the model and generating explanations for process outcome predictions at both the process instance (case) and event levels. Third, we explore counterfactual techniques, which have been shown to generate explanations that are more human-understandable, and propose an extension of a widely used model-agnostic counterfactual algorithm to derive milestone-aware counterfactual explanations along process execution. In closing, I will draw upon our latest research findings and provide insights into ongoing challenges and future directions in explainable predictive process analytics.
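To make the post-hoc idea concrete, here is a minimal sketch (not from the talk itself) of explaining a black-box process-outcome prediction model with permutation importance, a common model-agnostic post-hoc technique. The case-level features, the synthetic "event log" data, and the outcome rule are all illustrative assumptions.

```python
# Illustrative sketch: post-hoc, model-agnostic explanation of a
# process-outcome classifier. Features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical case-level features extracted from an event log:
# trace length, elapsed time in hours, and a rework-activity flag.
n_cases = 500
X = np.column_stack([
    rng.integers(3, 30, n_cases),    # trace_length
    rng.uniform(1, 200, n_cases),    # elapsed_hours
    rng.integers(0, 2, n_cases),     # had_rework
]).astype(float)

# Synthetic outcome (e.g., deadline violation), driven mainly by
# elapsed time and rework -- trace length is deliberately irrelevant.
y = (X[:, 1] + 50 * X[:, 2] > 150).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Post-hoc explanation: treat the trained model as a black box and
# measure how shuffling each feature degrades its predictions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
feature_names = ["trace_length", "elapsed_hours", "had_rework"]
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Because permutation importance only queries the model's predictions, the same procedure applies unchanged to a deep neural network, which is what makes such techniques attractive for inspecting "black box" process prediction models.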
Associate Professor Chun Ouyang is the Academic Lead (International and Engagement) at the School of Information Systems and an Associate Investigator of the Centre for Data Science at Queensland University of Technology (QUT). As an active researcher in process mining, she is particularly interested in developing explainable analytics for process intelligence. She is the co-founder and leader of the "eXplainable Analytics for Machine Intelligence" research team (xami-lab.org), driven by a vision to foster fairness, transparency, and trust in data-centric Artificial Intelligence systems.
In the past decade, Associate Professor Ouyang has been awarded numerous grants and projects, including three Australian Research Council Discovery Projects, two major research-industry projects funded by Australian Cooperative Research Centres, and three joint research-industry grants with Sun Yat-Sen University. To date, she has published over 100 research papers in international academic journals and conference proceedings, with a Google Scholar h-index of 30, an i10-index of 60, and a citation count of 5,827 (as of September 6, 2023). She has graduated four PhD students and currently supervises a team of six PhD candidates. Beyond her research, she serves as an assessor for national research grants from the Australian Research Council and international grants from the Netherlands Organisation for Scientific Research and the Austrian Science Fund.