BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241120T082409Z
LOCATION:HG E 1.1
DTSTART;TZID=Europe/Stockholm:20240604T123000
DTEND;TZID=Europe/Stockholm:20240604T130000
UID:submissions.pasc-conference.org_PASC24_sess146_msa236@linklings.com
SUMMARY:Transferring a Molecular Foundation Model for Polymer Property Pre
 dictions
DESCRIPTION:Minisymposium\n\nPei Zhang, Logan Kearney, Debsindhu Bhowmik, 
 Zachary Fox, Amit Naskar, and John Gounley (Oak Ridge National Laboratory)
 \n\nTransformer-based large language models have remarkable potential to a
 ccelerate design optimization for applications such as drug development an
 d material discovery. Self-supervised pretraining of transformer models re
 quires large-scale data sets, which are often sparsely populated in topica
 l areas such as polymer science. State-of-the-art approaches for polymers 
 conduct data augmentation to generate additional samples but unavoidably i
 ncur extra computational costs. In contrast, large-scale open-source data 
 sets are available for small molecules and provide a potential solution to
  data scarcity through transfer learning. In this presentation, we discuss
  using transformers pretrained on small molecules and fine-tuned on polyme
 r properties. We find that this approach achieves accuracy comparable 
 to that of models trained on augmented polymer data sets for a series 
 of benchmark prediction tasks.\n\nDomain: Chemistry and Materials\n\nSess
 ion Chair: John Gounley (Oak Ridge National Laboratory)
END:VEVENT
END:VCALENDAR
