BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Chicago
X-LIC-LOCATION:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20181221T160725Z
LOCATION:D171
DTSTART;TZID=America/Chicago:20181111T090000
DTEND;TZID=America/Chicago:20181111T091000
UID:submissions.supercomputing.org_SC18_sess167_wksp134@linklings.com
SUMMARY:Introduction - ResCuE-HPC: 1st Workshop on Reproducible, Customiza
 ble, and Portable Workflows for HPC
DESCRIPTION:Workshop\nReproducibility, Software Engineering, Workflows, Wo
 rkshop Reg Pass\n\nIntroduction - ResCuE-HPC: 1st Workshop on Reproducible
 , Customizable, and Portable Workflows for HPC\n\nFursin, Gamblin, Puzovic
 , Taufer\n\nExperiment reproducibility and artifact sharing are gradually bec
 oming the norm for publications at HPC conferences. However, our recent exp
 erience validating experimental results during artifact evaluation (AE) at P
 PoPP, CGO, PACT, and SC has highlighted multiple problems. Ad hoc expe
 rimental workflows, lack of common experimental methodology and tools, and
  the ever-changing software/hardware stack all place a heavy burden on eva
 luators when installing, running and analyzing complex experiments. Worse 
 still, even when artifacts (e.g. benchmarks, data sets, models) are shared
 , they are typically difficult to customize, port, reuse, and build upon. 
 Establishing common experimental workflow frameworks with common formats/A
 PIs and portable package managers can considerably simplify validation and
  reuse of experimental results.\n\nThis workshop will bring together HPC r
 esearchers and practitioners interested in developing common experimental 
 methodologies, workflow frameworks and package managers for HPC (in partic
 ular, participants in CLUSTER competitions and the recent SC reproducibili
 ty initiative). We will discuss the best practices of sharing benchmarks, 
 data sets, tools and experimental results in a customizable and reusable w
 ay. We will also cover the state of the art in frameworks for running workl
 oads on ever-changing systems, their current drawbacks and future improvem
 ents. Finally, we will compile a report on the current state-of-the-art techn
 iques and tools which we will share with artifact evaluation committees at
  SC and other top-tier conferences, and the ACM task force on reproducibil
 ity, of which we are founding members. We believe such a practical approac
 h will help improve the reproducibility initiative and artifact exchange a
 t SC while accelerating the hardware/software co-design of efficient HPC s
 ystems.
URL:https://sc18.supercomputing.org/presentation/?id=wksp134&sess=sess167
END:VEVENT
END:VCALENDAR