ISSN 2071-8594

Russian Academy of Sciences


Gennady Osipov

L. V. Utkin, A. A. Meldo, M. S. Kovalev, E. M. Kasimov. A Review of Methods for Explaining and Interpreting Decisions of Intelligent Cancer Diagnosis Systems


The paper presents a review of methods for explaining and interpreting the classification results produced by various machine learning models. A general classification of interpretation and explanation methods is given depending on the type of model being interpreted. The main approaches and examples of explanation methods in medicine, and in oncology in particular, are considered. A general scheme of an explainable-intelligence subsystem is proposed that allows explanations to be expressed in natural language.
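The perturbation-based family of explanation methods surveyed in the review can be illustrated with a minimal sketch: a black-box score is probed by replacing one feature at a time with a baseline value and recording how the output changes. The "diagnostic" model and the feature names below are purely illustrative assumptions, not taken from the paper.

```python
import math

def black_box(x):
    # Stand-in for an opaque diagnostic model: a fixed logistic score
    # over three hypothetical nodule features (illustrative only).
    s = 2.0 * x["spiculation"] + 1.5 * x["diameter_cm"] - 0.5 * x["calcification"]
    return 1.0 / (1.0 + math.exp(-s))

def perturbation_importance(model, instance, baseline=0.0):
    """Score each feature by how much replacing it with a baseline
    value changes the model output (occlusion-style sensitivity)."""
    ref = model(instance)
    scores = {}
    for name in instance:
        perturbed = dict(instance)
        perturbed[name] = baseline
        scores[name] = ref - model(perturbed)
    return scores

patient = {"spiculation": 1.0, "diameter_cm": 2.0, "calcification": 0.0}
explanation = perturbation_importance(black_box, patient)
```

The same idea, with sampled rather than exhaustive perturbations, underlies occlusion maps and the model-agnostic attribution methods discussed in the review.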


machine learning, explainable intelligence, interpretation, oncology, deep neural networks, intelligent diagnostic system.

PP. 55-65.

DOI 10.14357/20718594200406


1. Adadi A., Berrada M. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160, 2018.
2. Arrieta A.B., Diaz-Rodriguez N., Del Ser J.D., et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, arXiv:1910.10045, Oct. 2019.
3. Guidotti R., Monreale A., Ruggieri S., et al. A survey of methods for explaining black box models, ACM Computing Surveys 51(5):1-42, 2019.
4. Molnar C. Interpretable Machine Learning - A Guide for Making Black Box Models Explainable. 2019.
5. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence. 1:206-215. 2019.
6. Liu X., Hou F., Qin H., Hao A. Multi-view multi-scale CNNs for lung nodule type classification from CT images. Pattern Recognition, 77, 262-275, 2018.
7. Bennetot A., Laurent J.-L., Chatila R., Diaz-Rodriguez N. Towards explainable neural-symbolic visual reasoning, arXiv:1909.09065, Oct. 2019.
8. Zhang Q., Zhu S.C. Visual interpretability for deep learning: a survey. Frontiers of Information Technology & Electronic Engineering. 19(1), 27-39. 2018.
9. Hendricks L.A., Hu R., Darrell T., Akata Z. Grounding visual explanations. Proceedings of the European Conference on Computer Vision (ECCV), pp. 264-279, 2018.
10. Qi Z., Khorram S., Li F. Embedding deep networks into visual explanations, arXiv:1709.05360, Sep 2018.
11. Wang J., Gou L., Zhang W., et al. DeepVID: Deep visual interpretation and diagnosis for image classifiers via knowledge distillation. IEEE Transactions on Visualization and Computer Graphics 25(6), 2168-2180, 2019.
12. Ribeiro M.T., Singh S., Guestrin C. Why should I trust you? Explaining the predictions of any classifier. arXiv:1602.04938, Aug 2016.
13. Lundberg S.M., Lee S.-I. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, pp. 4765-4774, 2017.
14. Strumbelj E., Kononenko I. An efficient explanation of individual classifications using game theory. Journal of Machine Learning Research, 11:1-18, 2010.
15. Fong R.C., Vedaldi A. Interpretable explanations of black boxes by meaningful perturbation, Proceedings of the IEEE International Conference on Computer Vision, IEEE, pp. 3429-3437, 2017.
16. Chapman-Rounds M., Schulz M.-A., Pazos E., et al. EMAP: Explanation by minimal adversarial perturbation, arXiv:1912.00872, Dec 2019.
17. Dhurandhar A., Chen P.-Y., Luss R., et al. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. arXiv:1802.07623v2, Oct 2019.
18. Dhurandhar A., Pedapati T., Balakrishnan A., et al. Model agnostic contrastive explanations for structured data. arXiv:1906.00117, 2019.
19. Van Looveren A., Klaise J. Interpretable counterfactual explanations guided by prototypes, arXiv:1907.02584, Jul 2019.
20. Vu M.N., Nguyen T.D., et al. Evaluating Explainers via Perturbation. arXiv:1906.02032v1, Jun 2019.
21. Ming Y., Xu P., Qu H., Ren L. Interpretable and steerable sequence learning via prototypes. arXiv:1907.09728, Jul 2019.
22. Mittelstadt B., Russell C., Wachter S. Explaining explanations in AI. arXiv:1811.01439, Nov 2018.
23. Sokol K., Flach P.A. Counterfactual explanations of machine learning predictions: Opportunities and challenges for AI safety. SafeAI@AAAI, CEUR Workshop Proceedings, v. 2301, pp. 1-4, 2019.
24. Goyal Y., Wu Z., Ernst J., et al. Counterfactual visual explanations, arXiv:1904.07451, Apr 2019.
25. Wachter S., Mittelstadt B., Russell C. Counterfactual explanations without opening the black box: Automated decisions and the GDPR, Harvard Journal of Law & Technology, 31, 841-887, 2017.
26. Koh P.W., Ang K.-S., Teo H.H., et al. On the accuracy of influence functions for measuring group effects. arXiv:1905.13289. 2019.
27. Koh P.W., Liang P. Understanding black-box predictions via influence functions. Proceedings of the 34th International Conference on Machine Learning, volume 70, 1885–1894. 2017.
28. Melis M., Demontis A., Pintor M., et al. Secml: A Python library for secure and explainable machine learning, arXiv:1912.10013, Dec 2019.
29. Kann B.H., Thompson R., Thomas Jr C.R., et al. Artificial intelligence in oncology: current applications and future directions. Oncology, 33(2): 46-53, 2019.
30. Xie Y., Gao G., Chen X. Outlining the Design Space of Explainable Intelligent Systems for Medical Diagnosis. arXiv:1902.06019, Feb 2019.
31. Tonekaboni S., Joshi S., McCradden M.D., et al. What clinicians want: Contextualizing explainable machine learning for clinical end use, arXiv:1905.05134v2, Aug 2019.
32. Vellido A. The importance of interpretability and visualization in machine learning for applications in medicine and health care. Neural Computing and Applications, 1-15, 2019.
33. Holzinger A., Langs G., Denk H., et al. Causability and explainability of artificial intelligence in medicine, WIREs Data Mining and Knowledge Discovery, 9(4):e1312, 1-13, 2019.
34. Holzinger A., Malle B., Kieseberg P., et al. Towards the augmented pathologist: Challenges of explainable AI in digital pathology. arXiv:1712.06657, Dec 2017.
35. Holzinger A., Biemann C., Pattichis C.S., et al. What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923, Dec 2017.
36. Fellous J.-M., Sapiro G., Rossi A., et al. Explainable artificial intelligence for neuroscience: behavioral neurostimulation, Frontiers in Neuroscience, 13(1346), 1-14, 2019.
37. Horst F., Slijepcevic D., Lapuschkin S., et al. On the understanding and interpretation of machine learning predictions in clinical gait analysis using explainable artificial intelligence, arXiv:1912.07737, Dec 2019.
38. Bach S., Binder A., Montavon G., et al. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10:e0130140, 1-46, 2015.
39. Bohle M., Eitel F., Weygandt M., Ritter K. Layer-wise relevance propagation for explaining deep neural network decisions in MRI-based Alzheimer's disease classification, Frontiers in Aging Neuroscience, 11(Article 194), 1-17, 2019.
40. Lundberg S.M., Nair B., Vavilala M.S. Explainable machine learning predictions to help anesthesiologists prevent hypoxemia during surgery. bioRxiv 206540. Dec 2017.
41. Schulz M.-A., Chapman-Rounds M., Verma M., et al. Clusters in explanation space: Inferring disease subtypes from model explanations. arXiv:1912.08755, Dec 2019.
42. Schetinin V., Fieldsend J.E., Partridge D., et al. Confident interpretation of bayesian decision tree ensembles for clinical applications. IEEE Trans. on Information Technology in Biomedicine, 11(3):312–319, 2007.
43. Graziani M., Andrearczyk V., Muller H. Regression concept vectors for bidirectional explanations in histopathology. Understanding and Interpreting Machine Learning in Medical Image Computing Applications, Springer, Cham, pp. 124-132, 2018.
44. Karim Md.R., Cochez M., Beyan O., et al. Onconetexplainer: explainable predictions of cancer types based on gene expression data. arXiv preprint arXiv:1909.04169. Sep 2019.
45. Etmann C., Schmidt M., Behrmann J., et al. Deep Relevance Regularization: Interpretable and Robust Tumor Typing of Imaging Mass Spectrometry Data. arXiv:1912.05459, Dec 2019.
46. Shen S., Han S.X., Aberle D.R., et al. An interpretable deep hierarchical semantic convolutional neural network for lung nodule malignancy classification, Expert Systems with Applications, 128, 84-95, 2019.
47. Van Molle P., De Strooper M., Verbelen T., et al. Visualizing convolutional neural networks to improve decision support for skin lesion classification, Understanding and Interpreting Machine Learning in Medical Image Computing Applications, Springer, Cham, pp. 115-123, 2018.
48. Lamy J.-B., Sekar B., Guezennec G., et al. Explainable artificial intelligence for breast cancer: A visual case-based reasoning approach, Artificial Intelligence in Medicine, 94, 42-53, 2019.
49. Zhang Z., Xie Y., Xing F., et al. MDNet: A semantically and visually interpretable medical image diagnosis network, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6428-6436, 2017.
50. Yamamoto Y., Tsuzuki T., Akatsuka J., et al. Automated acquisition of explainable knowledge from unannotated histopathology images. Nature Communications, 10(5642), 1-9, 2019.