Enabling High-Level Graph Processing via Dynamic Tasking
Session: Research Posters
Event Type: Poster
Time: Thursday, November 15th, 8:30am - 5pm
Location: C2/3/4 Ballroom
Description: Data-intensive computing yields irregular, unbalanced workloads, particularly for large-scale problems running on distributed systems. Task-based runtime systems are commonly exploited to implement higher-level data-centric programming models, promoting multithreading and asynchronous coordination for performance. However, coping with dynamic workloads, such as those produced by large-scale graph processing, is challenging.
In this work, we took an exploratory approach to overcoming some typical bottlenecks in tasking systems. In particular, we propose (1) a novel task allocator based on dynamic per-thread allocation and all-to-all recycling networks, and (2) a reservation-free remote spawning scheme based on receiver-side buffering and back-pressure feedback/sensing to avoid overflows.
As a proof of concept, we implemented the proposed techniques underneath a high-level library of distributed C++ containers. Preliminary experimental evaluation shows consistent scalability, a clear improvement in performance (e.g., a 1.5x speedup over the original code on an 8M-node graph), and lower sensitivity to parameter tuning.