My Favorites from GTC 2012

I had a great time at GTC 2012; there was incredible energy and excitement from everyone I talked to. I think the energy level had a lot to do with all of the exciting technology announcements from NVIDIA (the Kepler GK110 architecture, CUDA 5, Nsight Eclipse Edition, VGX, and GeForce GRID, to name a few!), but having heaps of great content from outside contributors as well as NVIDIA was crucial to making GTC 2012 a great conference.

I was very busy at GTC and so I didn’t get to attend many of the talks that I wanted to see, and I’m sure the same is true for most attendees. Thankfully the GTC team created GTC On Demand to solve this problem.  With GTC On Demand you can watch almost any GTC talk, free, on the web. In the past month or so since GTC, I’ve been catching up on some of the great talks that I missed. In this post I want to share with you my favorites from GTC, and I hope you will share your favorite GTC talks in the comments.

The GTC keynotes were a lot of fun, and they are my first recommendations to watch if you have not already.  The opening keynote by NVIDIA CEO Jen-Hsun Huang covers all of the big announcements for GTC, as well as lots of fun demos.  Iain Couzin’s keynote “From Democratic Consensus to Cannibalistic Hordes” provided a fascinating overview of his group’s research on the collective behaviour of swarms and crowds.  Finally, “Not Your Grandfather’s Moon Landing”, from Robert Boehme and Wes Faler of the Part-Time Scientists, provided an inspiring look at their bid for the Lunar X Prize.

Two of my favorite sessions from GTC have already been discussed on Parallel Forall. I will just link to their streams (and the previous posts about them) without elaboration.

  • Stephen Jones and Lars Nyland from NVIDIA covered the new GK110 architecture in Inside Kepler, which was previously discussed here.
  • I co-presented a 3-part OpenACC tutorial. Part 1, which Duncan Poole and I presented, was already discussed here. Here are parts 2 (Cliff Woolley) and 3.  I especially enjoyed Michael Wolfe’s (PGI) presentation in part 3.

Vinod Grover and Yuan Lin (NVIDIA Compiler group) presented “Compiling CUDA and Other Languages for GPUs“. The key words in that title are “other languages”, because this talk was about NVVM, NVIDIA’s open compiler infrastructure based on LLVM. In the talk, Vinod and Yuan discussed the NVVM libraries and SDK and how they enable general-purpose programming languages and domain-specific languages (DSLs) to target GPUs.  They covered details of several samples from the NVVM SDK. I was very excited by this talk, because it really made it clear how straightforward it is for language and compiler developers to target GPUs using the NVVM SDK. Judging by the size of the audience (the large room was packed), many developers are as excited by this prospect as I am!
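To give a feel for what “targeting GPUs” means for a compiler writer: a language front end lowers its own AST into NVVM IR (which is based on LLVM IR), and libNVVM compiles that IR to PTX. The toy sketch below (my own illustration, not code from the talk or the SDK) lowers a tiny expression tree for a*x + y into LLVM-style IR text; the function and variable names are hypothetical, and real NVVM IR additionally requires kernel metadata, address-space annotations, and a data layout.

```python
# Toy illustration of a DSL front end's core job: lower an AST into
# LLVM-style IR text (NVVM IR is a subset of LLVM IR). libNVVM would
# then compile the real thing to PTX.

class Var:
    """A named scalar input."""
    def __init__(self, name):
        self.name = name

class BinOp:
    """A binary floating-point operation node."""
    def __init__(self, op, lhs, rhs):
        self.op, self.lhs, self.rhs = op, lhs, rhs

def lower(node, lines, counter):
    """Emit IR instructions for `node`; return the SSA name of its result."""
    if isinstance(node, Var):
        return "%" + node.name
    lhs = lower(node.lhs, lines, counter)
    rhs = lower(node.rhs, lines, counter)
    reg = "%%t%d" % counter[0]          # fresh SSA temporary
    counter[0] += 1
    opcode = {"*": "fmul", "+": "fadd"}[node.op]
    lines.append("  %s = %s float %s, %s" % (reg, opcode, lhs, rhs))
    return reg

def compile_axpy():
    # a * x + y, as a DSL front end might have parsed it
    ast = BinOp("+", BinOp("*", Var("a"), Var("x")), Var("y"))
    lines, counter = [], [0]
    result = lower(ast, lines, counter)
    return ("define float @axpy(float %a, float %x, float %y) {\n"
            "entry:\n" + "\n".join(lines) +
            "\n  ret float " + result + "\n}")

print(compile_axpy())
```

The point is that once a front end can emit IR like this, the GPU-specific heavy lifting (instruction selection, register allocation, PTX generation) is handled by the NVVM libraries rather than by the language implementer.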

Dan Bailey (Double Negative Studios) presented “Jet: A Domain-Specific Approach to Parallelism for Film Fluid Simulation“.  This talk provides a great example of how domain-specific languages can take advantage of massive parallelism on GPUs, in this case for high-performance simulation of fluid dynamics for films. Our new libNVVM SDK, a preview of which is available now to registered developers, will make targeting DSLs (as well as general-purpose programming languages) to GPUs much easier. Not surprisingly, given that he works in the film industry, Dan’s presentation is full of eye candy. His slides are not really slides but animations, and he includes an awesome film reel with a high-energy dubstep soundtrack at the end. Well worth watching! (The video is embedded at the top of this post.)

Bryan Catanzaro (NVIDIA Research) presented “Copperhead: Data-Parallel Python“, about a compiler he has developed for data-parallel extensions to Python. Copperhead is currently research code (available on GitHub), and it makes it really easy to write fast, parallel programs for the GPU in Python.
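To show the flavor of this approach, here is a minimal sketch in the Copperhead style. Copperhead code looks like ordinary Python restricted to a data-parallel subset; functions marked with its @cu decorator are JIT-compiled to CUDA when called. Since the sketch below should run without Copperhead installed, @cu is stubbed as a no-op here, so everything executes as plain Python; treat it as an illustration of the programming model, not of the library’s actual API surface.

```python
def cu(fn):
    # Stand-in for Copperhead's @cu decorator. Under Copperhead, this
    # would compile the function to CUDA; here it just returns fn so
    # the example runs in plain Python.
    return fn

@cu
def axpy(a, x, y):
    # Element-wise a*x + y, written as a data-parallel comprehension --
    # the kind of form a data-parallel compiler can map onto GPU threads.
    return [a * xi + yi for xi, yi in zip(x, y)]

print(axpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

The appeal is that the same source can run interpreted on the CPU for debugging and compiled on the GPU for speed, without the programmer writing any CUDA C.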

Paulius Micikevicius is one of NVIDIA’s most skilled CUDA programmers, as well as a great teacher of CUDA programming. Paulius gave two informative talks that I watched on GTC On Demand.  “Multi-GPU Programming“ covers general principles and implementation strategies for programming multi-GPU applications with CUDA. “GPU Performance Analysis and Optimization” is a detailed presentation covering fundamentals as well as advanced topics in CUDA optimization. This 2+-hour presentation really shows off the depth of Paulius’ knowledge of high-performance programming (on GPUs and CPUs!). I highly recommend both of these for programmers targeting the CUDA platform.

Finally, another packed session that I really enjoyed was Stephen Jones’ (NVIDIA) talk about CUDA Dynamic Parallelism, modestly entitled “New Features in the CUDA Programming Model“. In this talk Stephen provides a lot of interesting detail about CUDA Dynamic Parallelism (much more detail than is provided in the Inside Kepler talk or the CUDA 5 and Beyond talk).

I hope you find all of these GTC 2012 sessions as interesting as I did. I’m sure there are a lot of other interesting sessions—go find them at GTC On Demand and share your favorites with us below in the comments.


About Mark Harris

Mark is Chief Technologist for GPU Computing Software at NVIDIA. Mark has fifteen years of experience developing software for GPUs, ranging from graphics and games, to physically-based simulation, to parallel algorithms and high-performance computing. Mark has been using GPUs for general-purpose computing since before they even supported floating point arithmetic. While a Ph.D. student at UNC, he recognized this nascent trend and coined a name for it: GPGPU (General-Purpose computing on Graphics Processing Units). He started GPGPU.org to provide a forum for those working in the field to share and discuss their work. Follow @harrism on Twitter.