Lanzoni, D., Fantasia, A., Rovaris, F., Bergamaschini, R., Montalenti, F. (2025). Progressing strained layer growth by deep learning. In: Conference proceedings.
Progressing strained layer growth by deep learning
Lanzoni, D.; Fantasia, A.; Rovaris, F.; Bergamaschini, R.; Montalenti, F.
2025
Abstract
Carefully controlling the morphology of films and nanostructures during epitaxy is of paramount importance, particularly in view of quantum applications. Yet, this goal is often achieved based on the grower's experience which, in turn, is built largely on trial and error. There are two reasons for this: first, growth is a complex process and oversimplified models simply do not work. Second, time scales are "human" (minutes, hours) and, as such, typically not reachable by computational approaches accurate enough to yield a reliable description of growth. Here we shall discuss how state-of-the-art machine learning (ML) approaches can be used to reduce the gap between experiments and theory, following two different procedures. In the first one, a physical quantity, i.e. the elastic chemical potential, is learned from a large database of expensive finite element method calculations and plugged into a model tackling Ge/Si Stranski-Krastanow growth. This allows us not only to achieve a strong temporal speed-up, but also to extend the lateral dimensions of the simulation cell [1,2]. In the second application, instead, the evolution is learned directly from simulation snapshots. Examples will be shown for different kinds of relevant evolutions, showing how recurrent neural-network architectures open the possibility to extrapolate outside the sampled database while keeping an accurate description of the physics [3,4]. While all examples above refer to deterministic dynamics, we shall also take a glimpse at our more recent results, tackling fluctuations via Generative Adversarial Networks [5]. The talk shall end with a final, critical discussion of the possibility to use experimental data to directly train ML models.

[1] D. Lanzoni, ..., and F. Montalenti, APL Mach. Learn. 2, 036108 (2024).
[2] L. Martin-Encinar, ..., and F. Montalenti, Comput. Mater. Sci. 249, 113657 (2025).
[3] D. Lanzoni, ..., and F. Montalenti, Phys. Rev. Mater. 6, 103801 (2022).
[4] D. Lanzoni, ..., and F. Montalenti, Mach. Learn.: Sci. Technol. 5, 045017 (2024).
[5] D. Lanzoni, ..., and F. Montalenti, J. Chem. Phys. 159, 144109 (2023).
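To give a flavor of the second procedure, where the evolution is learned directly from simulation snapshots, the sketch below shows the bare structure of a recurrent step predictor rolled forward in time. It is purely illustrative and not the authors' model: the profile length, hidden-state size, and the randomly initialized weight matrices (which stand in for parameters that would be trained on snapshot data) are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 1D surface profile of N heights, hidden state of H units.
N, H = 64, 32

# Random weights stand in for parameters learned from simulation snapshots.
W_in = rng.normal(scale=0.1, size=(H, N))
W_h = rng.normal(scale=0.1, size=(H, H))
W_out = rng.normal(scale=0.1, size=(N, H))

def step(profile, hidden):
    """One recurrent update: map the current morphology snapshot and the
    hidden state to a predicted height increment and a new hidden state."""
    hidden = np.tanh(W_in @ profile + W_h @ hidden)
    dh = W_out @ hidden  # predicted change of the height profile
    return profile + dh, hidden

# Roll the (here untrained) dynamics forward from a small sinusoidal perturbation;
# with trained weights, this loop would extrapolate the learned evolution in time.
profile = 0.01 * np.sin(2 * np.pi * np.arange(N) / N)
hidden = np.zeros(H)
trajectory = [profile]
for _ in range(10):
    profile, hidden = step(profile, hidden)
    trajectory.append(profile)
```

Because the next snapshot is generated from the previous prediction rather than from ground-truth data, such a loop can in principle be iterated beyond the time window covered by the training database, which is the extrapolation capability discussed in the abstract.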


