BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Chicago
X-LIC-LOCATION:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20181221T160727Z
LOCATION:D167/174
DTSTART;TZID=America/Chicago:20181111T170000
DTEND;TZID=America/Chicago:20181111T173000
UID:submissions.supercomputing.org_SC18_sess221_ws_mlhpce122@linklings.com
SUMMARY:Training Speech Recognition Models on HPC Infrastructure
DESCRIPTION:Workshop\nApplications, Deep Learning, Machine Learning, Works
 hop Reg Pass\n\nTraining Speech Recognition Models on HPC Infrastructure\n
 \nKarkada, Saletore\n\nAutomatic speech recognition is used extensively in
  speech interfaces and spoken dialogue systems. To accelerate the developm
 ent of new speech recognition models and techniques, developers at Mozill
 a have open sourced a deep learning based Speech-To-Text engine known as p
 roject DeepSpeech, based on Baidu’s DeepSpeech research. To reduce model t
 raining time on CPUs for distributed DeepSpeech training, we have develope
 d optimizations to the Mozilla DeepSpeech code that scale model training t
 o a large number of Intel® CPU systems, including Horovod integration into
  DeepSpeech. We have also implemented a novel dataset partitioning scheme t
 o mitigate compute imbalance across multiple nodes of an HPC cluster. We d
 emonstrate that we are able to train the DeepSpeech model on the LibriSpee
 ch clean dataset to its state-of-the-art accuracy in 6.45 hours on a 16-no
 de Intel® Xeon® based HPC cluster.
URL:https://sc18.supercomputing.org/presentation/?id=ws_mlhpce122&sess=ses
 s221
END:VEVENT
END:VCALENDAR