BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Chicago
X-LIC-LOCATION:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20181221T160909Z
LOCATION:D171
DTSTART;TZID=America/Chicago:20181111T090000
DTEND;TZID=America/Chicago:20181111T123000
UID:submissions.supercomputing.org_SC18_sess167@linklings.com
SUMMARY:ResCuE-HPC: 1st Workshop on Reproducible, Customizable, and Portab
 le Workflows for HPC
DESCRIPTION:Workshop\nReproducibility, Software Engineering, Workflows, Wo
 rkshop Reg Pass\n\nWorkshop Morning Break\n\n\n\n---------------------\nOp
 en Panel: Automating Artifact Sharing, Evaluation, and Reuse\n\nFursin, Ga
 mblin, Taufer, Heroux, Harrell\n\n---------------------\nKeynote\n\nHeroux
 \n\n---------------------\nIntroduction - ResCuE-HPC: 1st Workshop on Repr
 oducible, Customizable, and Portable Workflows for HPC\n\nFursin, Gamblin,
  Puzovic, Taufer\n\nExperiment reproducibility and artifact sharing are gr
 adually becoming the norm for publications at HPC conferences. However, ou
 r recent experience validating experimental results during artifact evalua
 tion (AE) at PPoPP, CGO, PACT and SC also highlighted multiple problems. Ad-
 hoc experimental workflo...\n\n---------------------\nSemantically Organiz
 ed Containers for Reproducible Research\n\nYoungdahl, Yuan, Ton-That, Mali
 k, Jimenez...\n\nExperiments are a key component in systems and HPC-relate
 d research. They help validate new ideas and concepts. Sharing and reprodu
 cing experiments, however, is a challenge, especially when computational e
 xperiments reside in multiple computing environments, are disorganized int
 o multiple directorie...\n\n---------------------\nSupporting Thorough Art
 ifact Evaluation with Occam\n\nOliveira, Wilkinson, Mossé, Childers\n\nEff
 orts such as Artifact Evaluation (AE) have been growing, gradually making 
 software evaluation an integral part of scientific publication.  In this p
 aper, we describe how Occam can help to mitigate some of the challenges fa
 ced by both authors and reviewers. For authors, Occam provides the means t
 o...\n\n---------------------\nSpotting Black Swans With Ease: The Case fo
 r a Practical Reproducibility Platform\n\nJimenez, Maltzahn\n\nAdvances in
  agile software delivery methodologies and tools (commonly referred to as 
 _DevOps_) have not yet materialized in academic scenarios such as universi
 ty, industry and government laboratories. In this position paper, we make 
 the case for _Black Swan_, a platform for the agile implementation,...\n\n
 ---------------------\nConsidering the Development Workflow to Achieve Rep
 roducibility with Variation\n\nMercier, Faure, Richard\n\nThe ability to r
 eproduce an experiment is fundamental in computer science.  Existing appro
 aches focus on repeatability, but this is only the first step to reproduci
 bility: continuing a scientific work from a previous experiment requires b
 eing able to modify it. This ability is called reproducibility...\n\n-----
 ----------------\nAssessing Reproducibility: An Astrophysical Example of C
 omputational Uncertainty in the HPC Context\n\nStodden, Krafczyk\n\nWe pre
 sent an experiment using the Enzo simulation code on NCSA's Blue Waters sy
 stem to highlight the importance of computational and numerical uncertaint
 y in scientific computing on HPC systems. We quantify the (surprising) var
 iability of outputs from 200 identical simulation runs. We make two reco..
 .\n
END:VEVENT
END:VCALENDAR