Dec 7, 2018
Lopata Hall, Room 101
"A NUMA-Aware Provably-Efficient Task-Parallel Platform Based on the Work-First Principle"
Adviser: Angelina Lee
Task parallelism is designed to simplify parallel programming. When a task-parallel program runs on a modern NUMA architecture, however, it can fail to scale due to a phenomenon called work inflation: the overall time that multiple cores spend doing useful work is higher than the time required to do the same amount of work on one core, due to effects experienced only during parallel execution, such as additional cache misses, remote memory accesses, and memory bandwidth contention.
One can mitigate work inflation by co-locating the computation with its data, but this is nontrivial to do for task-parallel programs. First, by design, the scheduling of task-parallel programs is automated, giving the user little control over where the computation is performed. Second, such platforms tend to employ work stealing, which provides strong theoretical guarantees, but its randomized load-balancing protocol does not distinguish work items whose data is nearby from those whose data resides on a remote node.
In this work, we propose NUMA-WS, a NUMA-aware task-parallel platform engineered based on the work-first principle. By abiding by the work-first principle, we obtain a platform that is work efficient, provides the same theoretical guarantees as a classic work-stealing scheduler, and mitigates work inflation. We have extended the Cilk Plus runtime system to implement NUMA-WS. Empirical results indicate that NUMA-WS is work efficient and can provide better scalability by mitigating work inflation.
"Proactive Work Stealing for Futures"
The use of futures is a flexible way to express parallelism and is a natural extension of fork-join parallelism. However, this flexibility comes at the cost of additional execution overhead. Prior work has focused on adding futures to fork-join parallelism while remaining within the confines of the parsimonious work-stealing scheduling paradigm, in which worker threads never steal work unless their own work queues are empty. We propose a proactive work-stealing scheduling algorithm in which this prerequisite for stealing is not maintained when scheduling futures. At first glance this may seem counterintuitive, but we show that it has provable advantages over the traditional method. We also implemented our scheduler and found its performance to be comparable to that of traditional fork-join schedulers.