Before you start
- This part is for students who need MACE for their research. If you don't use MACE, you can skip this page.
- You should finish the orientation before proceeding with this tutorial. Make sure that you have installed MACE in your Python venv.
MACE
Introduction
MACE (multi-atomic cluster expansion) [1] is a machine learning potential (MLP) [2] trained on first-principles density functional theory (DFT) calculations. It has a foundation model, MACE-MP-0, trained on the Materials Project database, which can be used for general-purpose calculations.
Foundation model
A foundation model, also known as a large AI model, is a machine learning or deep learning model that is trained on broad data such that it can be applied across a wide range of use cases. You can find the foundation model (MACE-MP-0) [3] for MACE here. More details about training can be found here.
MACE utilizes equivariant message passing neural networks (MPNNs) [4][5] for computation. MPNNs are a type of graph neural network (GNN) [6] that parametrise a mapping from a labelled graph to a target space, either another graph or a vector space. E(3) equivariance ensures that the model respects the full symmetry group of translations, rotations, and reflections in three dimensions. Another framework similar to MACE is NequIP [7].
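To make the equivariance idea concrete, here is a minimal sketch (not part of the official MACE documentation) that rotates a small structure and checks that the predicted forces rotate in the same way. It assumes the mace-torch package and its mace_mp ASE calculator are installed, as covered in the orientation; the foundation model is downloaded on first use.

```python
# Minimal sketch: numerically check that predicted forces are equivariant
# under rotation. Assumes the mace-torch package (mace_mp calculator) is
# installed; the foundation model is downloaded on first use.
import numpy as np
from ase.build import molecule
from mace.calculators import mace_mp

calc = mace_mp(model="small", device="cpu", default_dtype="float64")

atoms = molecule("H2O")
atoms.calc = calc
forces = atoms.get_forces()

# Rotate the whole molecule by 90 degrees about the z-axis.
rotated = atoms.copy()
rotated.rotate(90, "z", center="COM")
rotated.calc = calc
rotated_forces = rotated.get_forces()

# Equivariance: the forces of the rotated structure equal the rotated forces.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
print(np.allclose(rotated_forces, forces @ R.T, atol=1e-6))  # expected: True
```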
Workflow
General workflow of MACE:
flowchart TD
id1[Structure Generation]
id2[DFT Calculations]
id3[Training or Fine-tuning]
id4{Validation:<br> Small Error?}
id5[Finished]
id1 -->|Structure DB| id2
id2 -->|Structure DB with Computed Data| id3
id3 -->|Trained Model| id4
id4 -->|Yes|id5
id4 -->|No<br>Add more structures|id1
- Structure generation: Generate structures to build a database for training. In this step, we can run molecular dynamics to create such a structure database (see the sketch after this list).
- DFT calculations: Use DFT to compute energies, forces, stresses, and other properties to form the training database.
- Training: Use run_mace_train.py to train or fine-tune your model.
- Validation: Compare predicted properties to DFT or experimental results.
- Add more structures: If the dataset is too small, add more structures to the dataset and repeat the loop.
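As a concrete illustration of the structure-generation step, here is a minimal sketch that runs a short molecular dynamics trajectory with ASE and saves snapshots for later DFT labelling. The EMT potential, MD settings, and file name are placeholders for illustration only, not the course's actual workflow.

```python
# Minimal sketch of the structure-generation step: run short MD with ASE and
# save snapshots to an extxyz file. EMT is only a cheap stand-in potential;
# in practice the saved structures would be recomputed with DFT before training.
from ase import units
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.io import write
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution

atoms = bulk("Cu", "fcc", a=3.6).repeat((2, 2, 2))
atoms.calc = EMT()
MaxwellBoltzmannDistribution(atoms, temperature_K=500)

dyn = Langevin(atoms, timestep=1.0 * units.fs, temperature_K=500, friction=0.02)

frames = []
dyn.attach(lambda: frames.append(atoms.copy()), interval=10)  # keep every 10th step
dyn.run(200)

# These snapshots form the "Structure DB" that goes to the DFT step.
write("training_structures.extxyz", frames)
```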
Practical Tutorial
We will run MACE through ASE; please look here for the tutorial and installation instructions.
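As a quick preview, here is a minimal sketch of what running MACE through ASE looks like, assuming the mace-torch package from the orientation provides the mace_mp calculator; the MACE-MP-0 model weights are downloaded automatically on first use.

```python
# Minimal sketch: use the MACE-MP-0 foundation model as an ASE calculator.
# Assumes the mace-torch package is installed; the model weights are
# downloaded automatically on first use.
from ase.build import bulk
from mace.calculators import mace_mp

calc = mace_mp(model="medium", device="cpu", default_dtype="float32")

atoms = bulk("Cu", "fcc", a=3.6)
atoms.calc = calc

print("Energy (eV):", atoms.get_potential_energy())
print("Forces (eV/Å):", atoms.get_forces())
```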
References

1. Ilyes Batatia, Dávid Péter Kovács, Gregor N. C. Simm, Christoph Ortner, and Gábor Csányi. MACE: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields. 36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2022. URL: https://proceedings.neurips.cc/paper_files/paper/2022/hash/4a36c3c51af11ed9f34615b81edb5bbc-Abstract-Conference.html
2. Oliver T. Unke, Stefan Chmiela, Huziel E. Sauceda, Michael Gastegger, Igor Poltavsky, Kristof T. Schütt, Alexandre Tkatchenko, and Klaus-Robert Müller. Machine Learning Force Fields. Chem. Rev., 121(16):10142–10186, August 2021. URL: https://pubs.acs.org/doi/10.1021/acs.chemrev.0c01111, doi:10.1021/acs.chemrev.0c01111
3. Ilyes Batatia, Philipp Benner, Yuan Chiang, Alin M. Elena, Dávid P. Kovács, Janosh Riebesell, Xavier R. Advincula, Mark Asta, Matthew Avaylon, William J. Baldwin, Fabian Berger, Noam Bernstein, Arghya Bhowmik, Samuel M. Blau, Vlad Cărare, James P. Darby, Sandip De, Flaviano Della Pia, Volker L. Deringer, Rokas Elijošius, Zakariya El-Machachi, Fabio Falcioni, Edvin Fako, Andrea C. Ferrari, Annalena Genreith-Schriever, Janine George, Rhys E. A. Goodall, Clare P. Grey, Petr Grigorev, Shuang Han, Will Handley, Hendrik H. Heenen, Kersti Hermansson, Christian Holm, Jad Jaafar, Stephan Hofmann, Konstantin S. Jakob, Hyunwook Jung, Venkat Kapil, Aaron D. Kaplan, Nima Karimitari, James R. Kermode, Namu Kroupa, Jolla Kullgren, Matthew C. Kuner, Domantas Kuryla, Guoda Liepuoniute, Johannes T. Margraf, Ioan-Bogdan Magdău, Angelos Michaelides, J. Harry Moore, Aakash A. Naik, Samuel P. Niblett, Sam Walton Norwood, Niamh O'Neill, Christoph Ortner, Kristin A. Persson, Karsten Reuter, Andrew S. Rosen, Lars L. Schaaf, Christoph Schran, Benjamin X. Shi, Eric Sivonxay, Tamás K. Stenczel, Viktor Svahn, Christopher Sutton, Thomas D. Swinburne, Jules Tilly, Cas van der Oord, Eszter Varga-Umbrich, Tejs Vegge, Martin Vondrák, Yangshuai Wang, William C. Witt, Fabian Zills, and Gábor Csányi. A foundation model for atomistic materials chemistry. March 2024. arXiv:2401.00096. URL: http://arxiv.org/abs/2401.00096
4. Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural Message Passing for Quantum Chemistry. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 2017. URL: https://dl.acm.org/doi/10.5555/3305381.3305512
5. Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) Equivariant Graph Neural Networks. February 2022. arXiv:2102.09844. URL: http://arxiv.org/abs/2102.09844
6. Benjamin Sanchez-Lengeling, Emily Reif, Adam Pearce, and Alexander B. Wiltschko. A Gentle Introduction to Graph Neural Networks. Distill, 6(9):e33, September 2021. URL: https://distill.pub/2021/gnn-intro, doi:10.23915/distill.00033
7. Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E. Smidt, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nat Commun, 13(1):2453, May 2022. URL: https://www.nature.com/articles/s41467-022-29939-5, doi:10.1038/s41467-022-29939-5