BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal//recurring_events_ical//2.0//EN
BEGIN:VEVENT
UID:3361e756-eba9-4daf-bce2-0b83d29e6f36@support.access-ci.org
DTSTAMP:20240219T100939Z
DTSTART:20240229T160000Z
DTEND:20240301T000000Z
SUMMARY:Data Parallelism: How to Train Deep Learning Models on Multiple GPU
 s (NVIDIA Deep Learning Institute)
DESCRIPTION:Modern deep learning challenges leverage increasingly large
 r datasets and more complex models. As a result\, significant computat
 ional power is required to train models effectively and efficiently. L
 earning to distribute data across multiple GPUs during deep learning m
 odel training makes possible an incredible wealth of new applications ut
 ilizing deep learning. Additionally\, the effective use of systems wit
 h multiple GPUs reduces training time\, allowing for faster applicatio
 n development and much faster iteration cycles. Teams who are able to pe
 rform training using multiple GPUs will have an edge\, building models tr
 ained on more data in shorter periods of time and with greater enginee
 r productivity. This workshop teaches you techniques for data-parallel de
 ep learning training on multiple GPUs to shorten the training time req
 uired for data-intensive applications. Working with deep learning too
 ls\, frameworks\, and workflows to perform neural network training\, y
 ou’ll learn how to decrease model training time by distributing data t
 o multiple GPUs\, while retaining the accuracy of training on a single GP
 U.
URL:https://support.access-ci.org/events/7359
END:VEVENT
END:VCALENDAR