Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools that enable them to develop solutions easily, and at the same time to emphasize the theoretical and practical aspects of algorithm design so that the solutions developed run efficiently under many different settings. This book, a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award, addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. It provides evidence that, with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results serve to ease the transition into the multicore era.

The book starts by introducing tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel. These lead to deterministic parallel algorithms that are efficient both in theory and in practice. The book then introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework enables short and concise implementations whose performance is competitive with that of highly optimized code and up to orders of magnitude faster than that of previous systems designed for distributed memory. Finally, the book bridges the gap between theory and practice in parallel algorithm design by introducing, for a variety of important problems on graphs and strings, the first algorithms that are both practical and theoretically efficient.
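To give a flavor of the frontier-based style of graph traversal that Ligra popularized, the following is a minimal, sequential Python sketch. The names `edge_map` and `bfs` are illustrative only; the actual framework is a parallel C++ library whose `EDGEMAP` primitive additionally switches between sparse and dense traversal representations.

```python
def edge_map(graph, frontier, update, condition):
    """Sketch of a Ligra-style edge map: apply `update` to each edge
    (u, v) leaving the frontier, and return the subset of targets v
    for which `condition` held and `update` succeeded.  `graph` maps
    each vertex to its list of out-neighbors."""
    next_frontier = set()
    for u in frontier:
        for v in graph[u]:
            if condition(v) and update(u, v):
                next_frontier.add(v)
    return next_frontier

def bfs(graph, root):
    """Breadth-first search expressed with edge_map: repeatedly map
    over the current frontier until no new vertices are discovered.
    Returns a parent pointer for every reachable vertex."""
    parent = {root: root}
    frontier = {root}
    while frontier:
        frontier = edge_map(
            graph, frontier,
            # Claim v by recording its parent; succeeds if v was unclaimed.
            update=lambda u, v: parent.setdefault(v, u) == u,
            # Only consider vertices not yet visited.
            condition=lambda v: v not in parent)
    return parent
```

In the real framework the per-edge `update` runs in parallel, so claiming a vertex is done with an atomic compare-and-swap rather than a plain dictionary write; the sequential sketch above keeps only the algorithmic shape.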