BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241120T082409Z
LOCATION:HG E 1.1
DTSTART;TZID=Europe/Stockholm:20240604T110000
DTEND;TZID=Europe/Stockholm:20240604T130000
UID:submissions.pasc-conference.org_PASC24_sess146@linklings.com
SUMMARY:MS3C - Scalable Machine Learning and Generative AI for Materials D
 esign
DESCRIPTION:Minisymposium\n\nThe design and discovery of materials with de
 sired functional properties is challenging due to labor-intensive experime
 ntal measurements and computationally expensive physics-based models, whic
 h preclude a thorough exploration of large chemical spaces characterized b
 y several chemical compositions and atomic configurations per composition.
  This disconnect has motivated the development of data-driven surrogate mo
 dels that can overcome experimental and computational bottlenecks to enabl
 e an effective exploration of such vast chemical spaces. In this minisympo
 sium, we discuss new generative artificial intelligence (AI) methods to pe
 rform materials design. A particular advantage of generative AI approaches
  is their ability to learn the context and syntax of molecular data descri
 bed by fundamental principles of physics and chemistry, providing a critic
 al basis for informing the generative design of molecules. To ensure gen
 eralizability and robustness, the generative AI model must be trained on
  a large volume of data that thoroughly samples diverse chemical regions
 . Due to the large volumes of data that mus
 t be processed, efficiently training these models requires leveraging a ma
 ssive amount of high performance computing (HPC) resources for scalable tr
 aining. This minisymposium aims to broadly cover HPC aspects for scalable 
 generative AI models across several heterogeneous distributed computationa
 l environments.\n\nEfficient Training of GNN-based Material Science Applic
 ations at Scale: An Orchestration of Data Movement Approach\n\nScalable da
 ta management techniques are crucial to effectively processing large volum
 es of scientific data on HPC platforms for distributed deep learning (DL) 
 model training. Because of the need to access data randomly and frequently
  in stochastic optimizers, in-memory distributed storage that keeps...\n\n
 \nJonghyun Bae (Lawrence Berkeley National Laboratory); Jong Youl Choi, Ma
 ssimiliano Lupo Pasini, and Kshitij Mehta (Oak Ridge National Laboratory);
  Khaled Ibrahim (Lawrence Berkeley National Laboratory); and Pei Zhang (Oa
 k Ridge National Laboratory)\n---------------------\nLarge Language Models
  and Agentic Systems for Bio-Inspired Materials Design\n\nFrom seashells t
 o mammal hooves to plant stems, biological materials have long captivated 
 materials scientists and mechanical engineers due to their impressive hier
 archical structure-property relationships. By understanding biological ins
 ights and motifs, the design of bio-inspired materials is empo...\n\n\nRac
 hel Luu and Markus Buehler (Massachusetts Institute of Technology)\n------
 ---------------\nTransferring a Molecular Foundation Model for Polymer Pro
 perty Predictions\n\nTransformer-based large language models have remarkab
 le potential to accelerate design optimization for applications such as dr
 ug development and material discovery. Self-supervised pretraining of tran
 sformer models requires large-scale data sets, which are often sparsely po
 pulated in topical areas ...\n\n\nPei Zhang, Logan Kearney, Debsindhu Bhow
 mik, Zachary Fox, Amit Naskar, and John Gounley (Oak Ridge National Labora
 tory)\n---------------------\nHydraGNN: Scalable Machine Learning and Gene
 rative AI for Accelerating Materials Design\n\nWe discuss the challenges i
 nvolved in developing large-scale training for generative AI models aimed 
 at material design. We employ HydraGNN, a scalable graph neural network (G
 NN) framework, alongside DDStore, a distributed in-memory data store, to f
 acilitate large-scale data distribution across the ...\n\n\nJong Youl Choi
 , Massimiliano Lupo Pasini, Pei Zhang, and Kshitij Mehta (Oak Ridge Nation
 al Laboratory) and Jonghyun Bae and Khaled Ibrahim (Lawrence Berkeley Nati
 onal Laboratory)\n\nDomain: Chemistry and Materials\n\nSession Chair: John
  Gounley (Oak Ridge National Laboratory)
END:VEVENT
END:VCALENDAR
