Event

Learning from Mutual Explanations for Cooperative Decision Making in Medicine

  • Speaker  Prof. Ute Schmid, University of Bamberg, Germany

  • Location

    Online


Abstract

Medical decision making is one of the most relevant real-world domains where intelligent support is necessary to help human experts master the ever-growing complexity. At the same time, standard approaches of data-driven black-box machine learning are not advisable, since medicine is a highly sensitive domain where errors may have fatal consequences. In the talk, we will advocate interactive machine learning from mutual explanations to overcome typical problems of purely data-driven approaches to machine learning. Mutual explanations, realised with the help of an interpretable machine learning approach, make it possible to incorporate expert knowledge into the learning process and support the correction of erroneous labels as well as dealing with noise. Mutual explanations therefore constitute a framework for explainable, comprehensible and correctable classification. Specifically, we present an extension of the inductive logic programming system Aleph which allows for interactive learning. We introduce our application LearnWithME, which is based on this extension. LearnWithME gets input from a classifier, such as a Convolutional Neural Net's prediction on medical images. Medical experts can ask for verbal explanations in order to evaluate the prediction. Through interaction with the verbal statements they can correct classification decisions and, in addition, can also correct the explanations. Thereby, expert knowledge is taken into account in the form of constraints for model adaptation.
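The interaction loop sketched in the abstract can be pictured in a few lines of illustrative Python: a rule-based (interpretable) model produces a verbal explanation for a prediction, the expert marks part of that explanation as wrong, and the correction is kept as a constraint for model adaptation. All names here (Rule, RuleModel, explain, add_constraint) are assumptions made for illustration and do not reflect the actual LearnWithME or Aleph interface.

```python
# Minimal sketch of a "mutual explanations" interaction loop.
# Hypothetical names; not the LearnWithME / Aleph API.
from dataclasses import dataclass, field


@dataclass
class Rule:
    """A learned rule: predict `label` if all (feature, value) conditions hold."""
    label: str
    conditions: list


@dataclass
class RuleModel:
    rules: list = field(default_factory=list)
    forbidden: set = field(default_factory=set)  # expert constraints

    def explain(self, example):
        """Return a prediction together with a verbal justification."""
        for rule in self.rules:
            if all(example.get(feat) == val for feat, val in rule.conditions):
                conds = " and ".join(f"{f} = {v}" for f, v in rule.conditions)
                return rule.label, f"classified as {rule.label} because {conds}"
        return None, "no rule matched"

    def add_constraint(self, feature):
        """Expert marks a feature as an invalid reason; drop it from all rules."""
        self.forbidden.add(feature)
        for rule in self.rules:
            rule.conditions = [(f, v) for f, v in rule.conditions
                               if f not in self.forbidden]


if __name__ == "__main__":
    model = RuleModel(rules=[
        Rule("malignant", [("texture", "irregular"), ("scanner", "type_A")]),
    ])
    case = {"texture": "irregular", "scanner": "type_A"}
    print(model.explain(case)[1])

    # Expert feedback: the scanner type is a confound, not a valid reason.
    model.add_constraint("scanner")
    print(model.explain(case)[1])
```

In the actual system the correction would trigger relearning under the stated constraints rather than a simple in-place edit, but the sketch shows the essential exchange: explanation out, correction in.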

Speaker

Ute Schmid is professor of Cognitive Systems at the University of Bamberg. She holds a diploma in psychology and a diploma in computer science, both from Technical University Berlin (TUB), Germany. She received both her doctoral degree (Dr. rer. nat.) and her habilitation from the Department of Computer Science of TUB. Her research focus is on interpretable and human-like machine learning, inductive programming, and multimodal explanations. Current research projects are on interpretable and explainable machine learning for medical image diagnosis (BMBF – TraMeExCo), for facial expression analysis (DFG – PainFaceReader), and for detecting irrelevant digital objects (DFG – Dare2Del in the priority program Intentional Forgetting). Ute Schmid is a fortiss research fellow for Inductive Programming. She is engaged in bringing AI education to schools and gives many outreach talks to provide a realistic picture of the benefits and risks of AI applications.