The field of eXplainable Artificial Intelligence (XAI) is slowly progressing out of its infancy, with current developments describing a trend towards human-aligned explanations. But what does that mean? In this talk, Dr. Sebastian Lapuschkin, Head of the XAI Group at the Fraunhofer Heinrich Hertz Institute (HHI), Berlin, Germany, will guide the audience through the development of the XAI field in recent years and discuss current and future trends in understanding AI models and their predictions.
Sebastian Lapuschkin received the Ph.D. degree with distinction from the Berlin Institute of Technology in 2018 for his pioneering contributions to the field of eXplainable Artificial Intelligence and interpretable machine learning. From 2007 to 2013 he studied computer science (B.Sc. and M.Sc.) at the Berlin Institute of Technology, with a focus on software engineering and machine learning. Currently, he is a tenured researcher and head of the Explainable AI Group at the Fraunhofer Heinrich Hertz Institute (HHI) in Berlin. He is the recipient of multiple awards, including the Hugo Geiger Prize for outstanding doctoral achievement and the 2020 Pattern Recognition Best Paper Award. His current research focuses on actionable eXplainable AI for the interpretation, holistic analysis, and rectification of machine learning system behavior. Further research interests include efficient machine learning and data analysis, and data and algorithm visualization.
ZOOM & Password: 148148