BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241120T082409Z
LOCATION:HG D 1.2
DTSTART;TZID=Europe/Stockholm:20240604T173000
DTEND;TZID=Europe/Stockholm:20240604T180000
UID:submissions.pasc-conference.org_PASC24_sess124_msa149@linklings.com
SUMMARY:3S in Distributed Graph Neural Networks: Sparse Communication, Sam
 pling, and Scalability
DESCRIPTION:Minisymposium\n\nAydin Buluc (Lawrence Berkeley National Labor
 atory, UC Berkeley); Alok Tripathy (UC Berkeley); and Katherine Yelick (UC
  Berkeley, Lawrence Berkeley National Laboratory)\n\nThis talk will focus 
 on distributed-memory parallel algorithms for graph neural network (GNN) t
 raining. We will first focus on utilizing sparse matrix primitives to para
 llelize mini-batch training based on node-wise and layer-wise sampling. Th
 en, we will illustrate techniques that are based on sparsity-aware sparse 
 matrix times dense matrix multiplication algorithms to accelerate both ful
 l-graph and mini-batch sampling based training.\n\nDomain: Computational M
 ethods and Applied Mathematics\n\nSession Chairs: Dimosthenis Pasadakis (U
 niversità della Svizzera italiana) and Olaf Schenk (Università della Svizz
 era italiana, ETH Zurich)
END:VEVENT
END:VCALENDAR
