
What are Your Favorite Parallel Programming References?

Recently a colleague asked me to recommend a good parallel programming textbook. Since this question comes up from time to time, I thought it would be interesting to share it with you, dear reader.

What are your favorite books about parallel programming (or parallel computing)? While we’re at it, why stop at books? What are your favorite parallel programming papers, journals, or magazines? Where do you look online to learn about parallel programming (websites, blogs, podcasts, etc.)?

Maybe I shouldn’t admit this, but I don’t actually have a general parallel computing textbook on my shelf! Many graduate-level parallel computing courses don’t require a specific textbook, instead assigning readings from a variety of sources. The course I took in graduate school took this approach (a recent version of the course and its reading list are worth a look). I still sometimes refer to the PRAM Algorithms handout, written by Professors Prins and Chatterjee, that we used in the course. I would recommend it to anyone interested in a theoretical algorithms background for their CUDA programming, since the per-thread-block CUDA programming model is not all that different from the CRCW PRAM model.
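To make that analogy concrete, here is a minimal sketch (the kernel name, block size, and input are my own and purely illustrative, not taken from the handout) of a CUDA thread block behaving like an arbitrary-CRCW PRAM: every thread in the block can read and write shared memory, and when several threads concurrently write the same value to the same location, any one of the writes suffices. The textbook example is a constant-time logical OR.

```cuda
#include <cstdio>

#define N 256  // assumed block size for this illustration

// All threads in the block share memory, and several threads may write the
// same shared location in the same step, with one (arbitrary) write
// surviving -- a constant-time logical OR, in the spirit of a CRCW PRAM.
__global__ void anyNonzero(const int *in, int *flag)
{
    __shared__ int found;
    if (threadIdx.x == 0) found = 0;
    __syncthreads();

    if (in[threadIdx.x] != 0) found = 1;  // concurrent writes of the same value
    __syncthreads();

    if (threadIdx.x == 0) *flag = found;
}

int main()
{
    int h_in[N] = {0};  // all zeros...
    h_in[100] = 42;     // ...except one element
    int h_flag = 0;
    int *d_in, *d_flag;
    cudaMalloc(&d_in, N * sizeof(int));
    cudaMalloc(&d_flag, sizeof(int));
    cudaMemcpy(d_in, h_in, N * sizeof(int), cudaMemcpyHostToDevice);

    anyNonzero<<<1, N>>>(d_in, d_flag);

    cudaMemcpy(&h_flag, d_flag, sizeof(int), cudaMemcpyDeviceToHost);
    printf("any nonzero? %d\n", h_flag);  // prints 1
    cudaFree(d_in);
    cudaFree(d_flag);
    return 0;
}
```

The analogy isn’t perfect, of course: a PRAM assumes synchronous lockstep steps, while a CUDA block requires explicit __syncthreads() barriers, and shared memory is scoped to a single block rather than the whole machine.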

There are definitely seminal papers that I recommend reading. “Data Parallel Algorithms”, by Danny Hillis and Guy Steele, is a classic introduction to the topic. Hillis’ Ph.D. dissertation, “The Connection Machine” (published as a book by the MIT Press, now out of print), is also a fascinating read, all the more so because many of the ideas that originated in the days of the Connection Machine are applicable to programming GPUs. Guy Blelloch’s “Prefix Sums and Their Applications” provides a useful introduction to prefix sum (also known as “scan”), one of the most important data-parallel primitive algorithms. My collaborators and I have relied on this paper and on Blelloch’s dissertation, “Vector Models for Data-Parallel Computing”, as background for some of our own research on algorithms for GPUs. More recently, I found the “View from Berkeley” paper to be an enlightening survey of the landscape of parallel computing.
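As a small, concrete illustration of scan (this sketch is mine, not taken from any of the papers above, and it only handles an array that fits in a single thread block), here is the step-doubling inclusive scan described in Hillis and Steele’s paper, written as a CUDA kernel over shared memory:

```cuda
#include <cstdio>

#define N 8  // assumed block/array size for this illustration

// Hillis-Steele style inclusive scan within one thread block: at each step,
// every element adds in the element 'offset' positions to its left, and the
// offset doubles until it covers the whole array (log2(N) steps).
__global__ void inclusiveScan(const int *in, int *out)
{
    __shared__ int temp[N];
    int tid = threadIdx.x;
    temp[tid] = in[tid];
    __syncthreads();

    for (int offset = 1; offset < N; offset *= 2) {
        int val = (tid >= offset) ? temp[tid - offset] : 0;
        __syncthreads();          // all reads finish before any writes
        temp[tid] += val;
        __syncthreads();          // all writes finish before the next step
    }
    out[tid] = temp[tid];
}

int main()
{
    int h_in[N] = {3, 1, 7, 0, 4, 1, 6, 3};
    int h_out[N];
    int *d_in, *d_out;
    cudaMalloc(&d_in,  N * sizeof(int));
    cudaMalloc(&d_out, N * sizeof(int));
    cudaMemcpy(d_in, h_in, N * sizeof(int), cudaMemcpyHostToDevice);

    inclusiveScan<<<1, N>>>(d_in, d_out);

    cudaMemcpy(h_out, d_out, N * sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < N; ++i) printf("%d ", h_out[i]);  // 3 4 11 11 15 16 22 25
    printf("\n");
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

This version favors clarity over efficiency; the work-efficient formulation that Blelloch develops reorganizes the computation into up-sweep and down-sweep phases.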

There is a growing list of textbooks that focus on programming GPUs, several of them sponsored by NVIDIA. A very popular text is “Programming Massively Parallel Processors” by David Kirk and Wen-mei Hwu, which is used in many current parallel computing courses. A more introductory-level text is “CUDA by Example”, by Jason Sanders and Edward Kandrot. “GPU Gems” is a series of books with articles on GPGPU and computer graphics contributed by experts from industry and academia; all three GPU Gems books are available for free on developer.nvidia.com. “GPU Computing Gems” is a new series of books focused entirely on GPU computing.

This is just the beginning of a reading list, and it is obviously biased by my own interests and experience, but I hope it serves to spark interest and discussion. Please share your favorite parallel computing and programming resources in the comments below!

(Photo credit: katerha/Creative Commons)

