Resources

  1. D’Amore, L.; Hahn, D. F.; Dotson, D. L.; Horton, J. T.; Anwar, J.; Craig, I.; Fox, T.; Gobbi, A.; Lakkaraju, S. K.; Lucas, X.; Meier, K.; Mobley, D. L.; Narayanan, A.; Schindler, C. E. M.; Swope, W. C.; in ’t Veld, P. J.; Wagner, J.; Xue, B.; Tresadern, G. Collaborative Assessment of Molecular Geometries and Energies from the Open Force Field. J. Chem. Inf. Model. 2022, 62 (23), 6094–6104. https://doi.org/10.1021/acs.jcim.2c01185.

=> Collaborative benchmarking: Shows how to design and run objective, large-scale force field benchmarks through an industry-academia collaboration, using diverse datasets and independent validation. A useful framework for anyone building comprehensive, unbiased benchmarks in molecular simulation; the core geometry and energy comparison is sketched below.
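As a minimal sketch of the style of comparison such benchmarks automate (my own illustration, not the paper's actual pipeline): score force field geometries and energies against QM references via aligned RMSD and the RMSE of relative conformer energies. The array shapes and kcal/mol units are assumptions for illustration.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (n_atoms, 3) coordinate sets after optimal superposition (Kabsch)."""
    P, Q = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))          # correct a possible reflection
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

def relative_energy_rmse(e_qm, e_mm):
    """RMSE between QM and MM conformer energies after shifting each set to its own minimum."""
    d_qm = np.asarray(e_qm) - np.min(e_qm)
    d_mm = np.asarray(e_mm) - np.min(e_mm)
    return np.sqrt(np.mean((d_mm - d_qm) ** 2))

# Toy usage with stand-in coordinates and energies:
rng = np.random.default_rng(0)
xyz_qm = rng.normal(size=(12, 3))
xyz_mm = xyz_qm + rng.normal(scale=0.05, size=(12, 3))
print(kabsch_rmsd(xyz_qm, xyz_mm))
print(relative_energy_rmse([0.0, 1.2, 2.5], [0.1, 1.0, 2.9]))
```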

  2. Hahn, D. F.; Bayly, C. I.; Boby, M. L.; Bruce Macdonald, H. E.; Chodera, J. D.; Gapsys, V.; Mey, A. S. J. S.; Mobley, D. L.; Perez Benito, L.; Schindler, C. E. M.; Tresadern, G.; Warren, G. L. Best Practices for Constructing, Preparing, and Evaluating Protein-Ligand Binding Affinity Benchmarks [Article v1.0]. Living J. Comput. Mol. Sci. 2022, 4 (1), 1497. https://doi.org/10.33011/livecoms.4.1.1497.

=> Benchmark set creation: Proposes guidelines for high-quality data curation, input preparation, and analysis that make protein-ligand binding affinity benchmarks more predictive, alongside an open-source, standardized benchmark set and toolkit for community use. The standard summary statistics are sketched below.
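Since the paper's evaluation guidance centers on reporting consistent error statistics, here is a minimal sketch (my illustration, not the paper's toolkit) of the statistics most often reported for predicted versus experimental binding free energies:

```python
import numpy as np
from scipy import stats

def affinity_stats(dg_exp, dg_pred):
    """Common error statistics for predicted vs. experimental binding free energies (kcal/mol)."""
    dg_exp, dg_pred = np.asarray(dg_exp), np.asarray(dg_pred)
    err = dg_pred - dg_exp
    return {
        "RMSE": float(np.sqrt(np.mean(err**2))),                     # overall error magnitude
        "MUE": float(np.mean(np.abs(err))),                          # mean unsigned error
        "R2": float(stats.pearsonr(dg_exp, dg_pred)[0] ** 2),        # linear correlation
        "Kendall_tau": float(stats.kendalltau(dg_exp, dg_pred)[0]),  # rank ordering
    }

print(affinity_stats([-9.1, -8.3, -10.2, -7.5], [-8.8, -8.9, -9.7, -7.9]))
```

Guidelines like these also stress reporting confidence intervals (e.g., from bootstrapping), since benchmark sets are small enough that point estimates alone can mislead.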

  3. Boothroyd, S.; Behara, P. K.; Madin, O. C.; Hahn, D. F.; Jang, H.; Gapsys, V.; Wagner, J. R.; Horton, J. T.; Dotson, D. L.; Thompson, M. W.; Maat, J.; Gokey, T.; Wang, L.-P.; Cole, D. J.; Gilson, M. K.; Chodera, J. D.; Bayly, C. I.; Shirts, M. R.; Mobley, D. L. Development and Benchmarking of Open Force Field 2.0.0: The Sage Small Molecule Force Field. J. Chem. Theory Comput. 2023, 19 (11), 3251–3275. https://doi.org/10.1021/acs.jctc.3c00039.

=> Method creation and benchmarking: Introduces Open Force Field 2.0.0 (Sage), a small-molecule force field with refit parameters and expanded validation. Sage shows improved accuracy across diverse molecular benchmarks and remains compatible with AMBER biopolymer force fields, with data and methods publicly available for reproducibility; loading Sage is sketched below.
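For readers who want to try Sage directly, a minimal sketch using the openff-toolkit API (assuming a recent openff-toolkit and the released "openff-2.0.0.offxml" parameter file are installed):

```python
from openff.toolkit import ForceField, Molecule

# Build a small molecule from SMILES and parametrize it with Sage (OpenFF 2.0.0).
molecule = Molecule.from_smiles("CCO")           # ethanol; hydrogens added automatically
force_field = ForceField("openff-2.0.0.offxml")  # the Sage release artifact

# Assign SMIRNOFF parameters and produce an OpenMM System ready for simulation.
system = force_field.create_openmm_system(molecule.to_topology())
print(system.getNumParticles())                  # 9 particles for ethanol
```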

  4. Cavender, C. E.; Case, D. A.; Chen, J. C.-H.; Chong, L. T.; Keedy, D. A.; Lindorff-Larsen, K.; Mobley, D. L.; Ollila, O. H. S.; Oostenbrink, C.; Robustelli, P.; Voelz, V. A.; Wall, M. E.; Wych, D. C.; Gilson, M. K. Structure-Based Experimental Datasets for Benchmarking of Protein Simulation Force Fields. arXiv March 2, 2023. https://doi.org/10.48550/arXiv.2303.11056.

=> Benchmarking datasets: This review surveys NMR and room-temperature crystallography datasets for benchmarking protein force fields, giving computational researchers practical guidance on using these experimental data to assess simulation accuracy. One such comparison, back-calculating scalar couplings, is sketched below.
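As one concrete example of how such data are used, scalar couplings can be back-calculated from simulated dihedral angles and compared to the NMR measurement. A sketch using a Karplus relation for backbone 3J(HN,Hα) couplings (the coefficients below are typical literature values, quoted only for illustration):

```python
import numpy as np

def karplus_3j_hn_ha(phi_deg, A=6.51, B=-1.76, C=1.60):
    """Back-calculate 3J(HN,Ha) couplings (Hz) from backbone phi angles (degrees)."""
    theta = np.radians(np.asarray(phi_deg) - 60.0)  # phi - 60 deg convention for this coupling
    return A * np.cos(theta) ** 2 + B * np.cos(theta) + C

# Ensemble-average over phi samples from a simulation before comparing to NMR:
phi_samples = np.random.default_rng(1).normal(loc=-65.0, scale=15.0, size=1000)
print(f"predicted <3J> = {karplus_3j_hn_ha(phi_samples).mean():.2f} Hz")
```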

  5. Hahn, D. F.; Gapsys, V.; de Groot, B. L.; Mobley, D. L.; Tresadern, G. Current State of Open Source Force Fields in Protein-Ligand Binding Affinity Predictions. ChemRxiv August 29, 2023. https://doi.org/10.26434/chemrxiv-2023-ml7gd.

=> Force field comparison: Comprehensively evaluates open source force fields for binding affinity prediction, highlights factors beyond force field choice that affect accuracy (such as input preparation and sampling convergence), and shows that a consensus over multiple force fields improves results. Data and tools are available for replication and further benchmarking; the consensus idea is sketched below.
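The consensus idea itself is simple; a minimal sketch of the general approach (hypothetical numbers, not the paper's protocol): average each ligand's predictions over several force fields and score the average against experiment.

```python
import numpy as np

# Hypothetical binding free energies (kcal/mol) from three force fields for five ligands;
# real inputs would come from alchemical free energy calculations.
predictions = {
    "ff_A": np.array([-0.5, 1.2, -2.1, 0.3, -1.0]),
    "ff_B": np.array([-0.8, 0.9, -1.7, 0.6, -1.4]),
    "ff_C": np.array([-0.2, 1.5, -2.4, 0.1, -0.9]),
}
experiment = np.array([-0.6, 1.1, -2.0, 0.4, -1.2])

consensus = np.mean(list(predictions.values()), axis=0)  # per-ligand average
for name, pred in {**predictions, "consensus": consensus}.items():
    rmse = np.sqrt(np.mean((pred - experiment) ** 2))
    print(f"{name:10s} RMSE = {rmse:.2f} kcal/mol")
```

Averaging helps because partially independent force field errors tend to cancel, the same reason ensembling helps in machine learning.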

  6. Ross, G. A.; Lu, C.; Scarabelli, G.; Albanese, S. K.; Houang, E.; Abel, R.; Harder, E. D.; Wang, L. The Maximal and Current Accuracy of Rigorous Protein-Ligand Binding Free Energy Calculations. ChemRxiv October 13, 2023. https://doi.org/10.26434/chemrxiv-2022-p2vpg-v2.

=> FEP accuracy: Evaluates free energy perturbation (FEP) for predicting binding affinities on a large dataset, showing that with careful preparation FEP can approach the accuracy limit set by experimental uncertainty, and outlines reliable protocols that strengthen FEP's predictive power in drug discovery. A toy calculation of that experimental limit is sketched below.
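The "experimental limit" argument can be made concrete with a toy simulation: even a perfect model cannot beat the noise in the measurements it is scored against. The 0.4 kcal/mol uncertainty and 6 kcal/mol dynamic range below are assumptions for illustration, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma_exp = 10_000, 0.4                        # assumed experimental noise (kcal/mol)

true_dg = rng.uniform(-12.0, -6.0, n)             # assumed 6 kcal/mol dynamic range
measured = true_dg + rng.normal(0.0, sigma_exp, n)

# Score a hypothetically perfect model (its predictions equal the true values):
rmse_floor = np.sqrt(np.mean((true_dg - measured) ** 2))
max_r2 = np.corrcoef(true_dg, measured)[0, 1] ** 2
print(f"RMSE floor ~ {rmse_floor:.2f} kcal/mol, max attainable R^2 ~ {max_r2:.2f}")
```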

  7. Koehler Leman, J.; Lyskov, S.; Lewis, S. M.; Adolf-Bryfogle, J.; Alford, R. F.; Barlow, K.; Ben-Aharon, Z.; Farrell, D.; Fell, J.; Hansen, W. A.; Harmalkar, A.; Jeliazkov, J.; Kuenze, G.; Krys, J. D.; Ljubetič, A.; Loshbaugh, A. L.; Maguire, J.; Moretti, R.; Mulligan, V. K.; Nance, M. L.; Nguyen, P. T.; Ó Conchúir, S.; Roy Burman, S. S.; Samanta, R.; Smith, S. T.; Teets, F.; Tiemann, J. K. S.; Watkins, A.; Woods, H.; Yachnin, B. J.; Bahl, C. D.; Bailey-Kellogg, C.; Baker, D.; Das, R.; DiMaio, F.; Khare, S. D.; Kortemme, T.; Labonte, J. W.; Lindorff-Larsen, K.; Meiler, J.; Schief, W.; Schueler-Furman, O.; Siegel, J. B.; Stein, A.; Yarov-Yarovoy, V.; Kuhlman, B.; Leaver-Fay, A.; Gront, D.; Gray, J. J.; Bonneau, R. Ensuring Scientific Reproducibility in Bio-Macromolecular Modeling via Extensive, Automated Benchmarks. Nat. Commun. 2021, 12 (1), 6947. https://doi.org/10.1038/s41467-021-27222-7.

=> Reproducible benchmarking: Presents a framework for automated, continuous benchmarking of macromolecular modeling, with over 40 scientific benchmarks implemented on top of it, giving the community a scalable, documented way to keep results reproducible across computational environments. The basic pattern is sketched below.
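In miniature, the pattern such a framework automates looks like this hypothetical sketch: each benchmark is a named task with a stored reference value and a tolerance, rerun on every code revision and flagged on drift (all names and thresholds here are invented).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Benchmark:
    name: str
    run: Callable[[], float]  # produces a scalar quality metric for this revision
    reference: float          # previously accepted value of the metric
    tolerance: float          # allowed drift before the benchmark is flagged

def run_suite(benchmarks: List[Benchmark]) -> None:
    """Run every benchmark and flag regressions against stored references."""
    for b in benchmarks:
        value = b.run()
        status = "ok" if abs(value - b.reference) <= b.tolerance else "REGRESSION"
        print(f"{b.name:25s} {value:8.3f} (ref {b.reference:.3f})  {status}")

# Hypothetical usage; real suites run docking, design, or loop-modeling protocols.
run_suite([Benchmark("toy_score", lambda: 1.02, reference=1.00, tolerance=0.05)])
```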