The field of Explainable Artificial Intelligence (XAI) is slowly progressing out of its infancy, with current developments describing a trend towards human-aligned explanations. But what does that mean? In this talk, Dr. Sebastian Lapuschkin, Head of the XAI Group at the Fraunhofer Heinrich Hertz Institute, Berlin, Germany,
will guide the audience through the development of the XAI field in recent years and discuss current and future trends for understanding AI models and model predictions.
Sebastian Lapuschkin received the Ph.D. degree with distinction from the Berlin Institute of Technology in 2018 for his pioneering contributions to the field of Explainable Artificial Intelligence (XAI) and interpretable machine learning. From 2007 to 2013 he studied computer science (B. Sc. and M. Sc.) at the Berlin Institute of Technology,
with a focus on software engineering and machine learning.
Currently, he is the Head of the Explainable Artificial Intelligence Group at the Fraunhofer Heinrich Hertz Institute (HHI) in Berlin. He is the recipient of multiple awards, including the Hugo Geiger Prize for outstanding doctoral achievement and the 2020 Pattern Recognition Best Paper Award. His work is focused on pushing the boundaries of XAI, e.g., towards achieving human-understandable explanations, or towards the utilization of interpretable feedback for the improvement of machine learning systems and data.
Further research interests include efficient machine learning and data analysis, as well as data and algorithm visualization.