Dec 13, 2016
Jolley Hall, Room 309
"Easier Parallel Programming with Provably-Efficient Runtime Schedulers"
Advisor: Kunal Agrawal
Over the past decade, processor manufacturers have pivoted from increasing uniprocessor performance to multicore architectures. However, utilizing this computational power has proved challenging for software developers. Many concurrency platforms and languages have emerged to address parallel programming challenges, yet writing correct and performant parallel code retains a reputation as one of the hardest tasks a programmer can undertake. One of the biggest difficulties is synchronizing access to shared memory.
This dissertation studies how runtime scheduling systems can be used to make parallel programming easier. We address three challenges: writing parallel data structures, automatically finding shared-memory bugs, and reproducing non-deterministic synchronization bugs. Each of the systems presented depends on a novel runtime system. We show that each runtime provides theoretical performance guarantees and performs well in practice. The proposed work aims to extend our work on finding shared-memory bugs to a larger class of computations.