Validating the validation: reanalyzing a large‑scale comparison of deep learning and machine learning models for bioactivity prediction

Matthew C. Robinson¹ · Robert C. Glen²,³ · Alpha A. Lee¹

Received: 27 May 2019 / Accepted: 22 December 2019
© The Author(s) 2020
Abstract

Machine learning methods may have the potential to significantly accelerate drug discovery. However, the increasing rate of new methodological approaches being published in the literature raises the fundamental question of how models should be benchmarked and validated. We reanalyze the data generated by a recently published large-scale comparison of machine learning models for bioactivity prediction and arrive at a somewhat different conclusion. We show that the performance of support vector machines is competitive with that of deep learning methods. Additionally, using a series of numerical experiments, we question the relevance of area under the receiver operating characteristic curve as a metric in virtual screening. We further suggest that area under the precision–recall curve should be used in conjunction with the receiver operating characteristic curve. Our numerical experiments also highlight challenges in estimating the uncertainty in model performance via scaffold-split nested cross validation.
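The abstract's caution about the receiver operating characteristic curve comes down to class imbalance: virtual-screening decks contain far more inactives than actives, a regime in which ROC AUC can look strong while precision among the top-ranked compounds remains poor. The following is a minimal sketch of that effect using scikit-learn, assuming a hypothetical 0.1% active rate and Gaussian score distributions; neither choice comes from the paper, and the numbers are illustrative only.

```python
# Minimal sketch (not from the paper): ROC AUC versus average precision on a
# synthetic, heavily imbalanced screen. The 0.1% active rate and Gaussian
# score distributions are assumptions chosen only to illustrate the effect.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

n_actives, n_inactives = 100, 99_900  # ~0.1% actives, plausible for a screening deck
y_true = np.concatenate([np.ones(n_actives), np.zeros(n_inactives)])

# Model scores: actives rank somewhat higher on average, with heavy overlap.
scores = np.concatenate([
    rng.normal(loc=1.0, scale=1.0, size=n_actives),   # actives
    rng.normal(loc=0.0, scale=1.0, size=n_inactives), # inactives
])

print(f"ROC AUC:           {roc_auc_score(y_true, scores):.3f}")
print(f"Average precision: {average_precision_score(y_true, scores):.3f}")
# ROC AUC comes out near 0.76 (analytically, Phi(1/sqrt(2)) for this setup),
# while average precision stays very small; its no-skill baseline is the
# 0.001 active rate, and almost every top-ranked compound is inactive.
```

The exact values depend on the assumed distributions, but the qualitative gap, a respectable ROC AUC alongside a near-baseline average precision, is the pattern the reanalysis examines on real bioactivity data.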
Introduction

Computational approaches to drug discovery are often justified as necessary due to the prohibitive time and cost of experiments. Unfortunately, many papers fail to sufficiently prove that the proposed, novel techniques are actually an advance on current approaches when applied to realistic drug discovery programs. Models are often shown to work in situations differing greatly from reality, producing impressive metrics that differ greatly from the quantity of interest.

Electronic supplementary material: The online version of this article (https://doi.org/10.1007/s10822-019-00274-0) contains supplementary material, which is available to authorized users.

* Correspondence: Alpha A. Lee, [email protected]
1 Department of Physics, University of Cambridge, J J Thomson Avenue, Cambridge CB3 0HE, UK
2 The Centre for Molecular Informatics, Department of Chemistry, University of Cambridge, Cambridge CB2 1EW, UK
3 Computational and Systems Medicine, Department of Metabolism, Digestion and Reproduction, Faculty of Medicine, Imperial College, South Kensington, London SW7 2AZ, UK
It is then often the time and cost of properly implementing and testing these proposed techniques against existing methods that becomes prohibitive for the practitioner. There is also a significant opportunity cost if models prove to be inaccurate and misdirect resources.

These concerns are not new to the field of computational chemistry. Walters [1], Landrum and Stiefel [2], and others have previously critiqued the state of the literature, even referring to many papers as "advertisements". Furthermore, Nicholls has provided useful overviews of statistical techniques for uncertainty quantification and method comparison [3–5]. Recent works provided an important review on the importance of ev