Title:
|
EXPLAINING AI MODELS FOR CLINICAL RESEARCH: VALIDATION THROUGH MODEL COMPARISON AND DATA SIMULATION |
Author(s):
|
Qing Zeng-Treitler, Yijun Shao, Douglas Redd, Joseph Goulet, Cynthia Brandt and Bruce Bray |
ISBN:
|
978-989-8533-89-0 |
Editors:
|
Mário Macedo |
Year:
|
2019 |
Edition:
|
Single |
Keywords:
|
Explainable AI, Validation, Clinical Research |
Type:
|
Full Paper |
First Page:
|
27 |
Last Page:
|
34 |
Language:
|
English |
Paper Abstract:
|
For clinical research to take advantage of artificial intelligence techniques such as the various types of deep neural
networks, we need to be able to explain deep neural network models to clinicians and researchers. While some
explanation approaches have been developed, their validation and utilization remain very limited. In this study, we
evaluated a novel explainable artificial intelligence method called impact assessment by applying it to deep neural
networks trained on real-world and simulated data. Using real clinical data, the impact scores from deep neural
networks were compared with odds ratios from logistic regression models. Using simulated data, the impact scores from
deep neural networks were compared with impact scores calculated from the ground truth (i.e., the formulas used to
generate the simulated data). The correlations between impact scores and odds ratios ranged from 0.63 to 0.97. The
correlations between impact scores from the DNN and the ground truth were all above 0.99. These results suggest that
the impact score provides a valid explanation of the contribution of a variable in a DNN model. |
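
To make the validation design in the abstract concrete, the sketch below simulates binary data from a known formula, trains a neural network and a logistic regression on the same data, derives a per-variable impact score from the network by toggling each input, and correlates those scores with the odds ratios and the ground-truth coefficients. The flip-based impact definition, the sklearn models, and all names here are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of the validation idea: correlate DNN "impact" scores
# with logistic regression odds ratios and with the known generating formula.
# The impact definition below (mean change in predicted probability when a
# binary variable is flipped on vs. off) is an assumption for illustration.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Simulate 10 binary predictors and an outcome from a known logistic formula
# (the "ground truth"), loosely mirroring the paper's simulation setup.
n, p = 5000, 10
X = rng.integers(0, 2, size=(n, p))
true_coef = rng.normal(0, 1, size=p)           # ground-truth log odds ratios
logits = X @ true_coef - true_coef.sum() / 2
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Fit both model types on the same data.
dnn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0).fit(X, y)
lr = LogisticRegression(max_iter=1000).fit(X, y)

def impact_score(model, X, j):
    # Assumed impact score: average shift in predicted probability when
    # variable j is set to 1 versus 0, holding the other variables fixed.
    X_on, X_off = X.copy(), X.copy()
    X_on[:, j], X_off[:, j] = 1, 0
    return float(np.mean(model.predict_proba(X_on)[:, 1]
                         - model.predict_proba(X_off)[:, 1]))

impacts = np.array([impact_score(dnn, X, j) for j in range(p)])
odds_ratios = np.exp(lr.coef_.ravel())

# Correlations analogous to the two comparisons reported in the abstract.
print("corr(impact, odds ratio):   %.3f" % pearsonr(impacts, odds_ratios)[0])
print("corr(impact, ground truth): %.3f" % pearsonr(impacts, true_coef)[0])
```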