12 GTC 2015 Sessions Not to Miss

With one week to go until we all descend on GTC 2015, I’ve scoured the list of Accelerated Computing sessions and put together 12 diverse “not to miss” talks you should add to your planner. This year, the conference highlights the revolution in Deep Learning that will affect every aspect of computing. GTC 2015 includes over 40 session categories, including deep learning and machine learning, scientific visualization, cloud computing, and HPC.

This is the place where scientists, programmers, researchers, and a myriad of creative professionals convene to tap into the power of a GPU for more than gaming. –Forbes

Tuesday, March 17

An Introduction to CUDA Programming (S5661)


This introductory tutorial is intended for those new to CUDA; you will leave with the essential knowledge needed to start programming in CUDA – no experience required! For those with prior CUDA experience, this is a great session to brush up on key concepts required for the subsequent tutorials on CUDA optimization. The other tutorials in this series are: An Introduction to the GPU Memory Model, Asynchronous Operations and Dynamic Parallelism in CUDA, and Essential CUDA Optimization Techniques.
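To give a flavor of what an introductory CUDA tutorial covers, here is a minimal vector-addition example – a hedged sketch of the canonical "hello world" of CUDA, not material from the session itself – in which each GPU thread computes one output element:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified (managed) memory keeps the example short; introductory
    // tutorials also cover explicit cudaMalloc/cudaMemcpy.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compile with `nvcc vecadd.cu -o vecadd`; the grid/block index arithmetic and the bounds check are exactly the concepts the optimization tutorials later in the series build on.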

GTC attendees learn from the brightest minds in accelerated computing with hundreds of talks and hands-on tutorials.

SMTool: A GPU based Satellite Image Analysis Tool (S5201)


Dilip Patlolla, R&D Engineer in the Geographic Information Science and Technology (GIST) Group at Oak Ridge National Laboratory, will demonstrate SMTool, the group’s advanced satellite image analysis tool built on the CUDA platform, which processes city-scale sub-meter resolution satellite imagery to detect and discriminate man-made structures.

Speech: The Next Generation (S5631)


Bryan Catanzaro is a research scientist at Baidu’s new Silicon Valley Artificial Intelligence Laboratory, working with Adam Coates and Andrew Ng to create next-generation systems for deep learning. In his talk, he will show how next-generation deep learning models can provide state-of-the-art speech recognition performance. These models are trained on clusters of GPUs using CUDA, MPI, and InfiniBand.

Wednesday, March 18

VMD: Visualization and Analysis of Biomolecular Complexes with GPU Computing (S5371)


John Stone, 2010 CUDA Fellow and Senior Research Programmer in the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign, will showcase recent successes in the use of GPUs to accelerate challenging molecular visualization and analysis tasks on hardware platforms ranging from commodity desktop computers to the latest Cray supercomputers.

OpenACC for Fortran Programmers (S5388)


Michael Wolfe, Compiler Engineer at NVIDIA and author of “High Performance Compilers for Parallel Computing,” will introduce OpenACC to new GPU and OpenACC programmers, providing the basic material necessary to start successfully using GPUs for your Fortran programs. He will also share advanced hints and tips for Fortran programmers with larger applications they want to accelerate with a GPU. Topics will include dynamic device data lifetimes, global data, procedure calls, derived type support, and much more.

High Performance In-Situ Visualization with Thousands of GPUs


In-situ visualization is one of the major themes in HPC. The ability to attach a massively parallel visualization tool to a live simulation can be valuable to researchers whose simulations may last days or even weeks on a supercomputer. Evghenii Gaburov of SURFsara, a CUDA Research Center, and Jeroen Bédorf of CWI will present their first attempt at this, using the US Titan and Swiss Piz Daint supercomputers to achieve ~10 fps on 1024 GPUs.

Thursday, March 19

GPUs and Machine Learning: A Look at cuDNN (S5331)


Sharan Chetlur, Software Engineer at NVIDIA, will talk about cuDNN, NVIDIA’s CUDA library of deep learning primitives. Addressing demand from engineers and data scientists, NVIDIA created a library similar in intent to BLAS, with optimized routines for deep learning workloads. The library is easy to integrate into existing frameworks, and provides optimized performance and memory usage across GPU generations.

High-Performance Broadcast with GPUDirect RDMA and InfiniBand Hardware Multicast for Streaming Applications (S5507)


Author of over 350 papers and a Professor at The Ohio State University, Dhabaleswar K. (DK) Panda will talk about the latest developments in middleware design that boost the performance of GPGPU-based streaming applications. Several middlewares already support communication directly from GPU device memory and optimize it using various features offered by the CUDA toolkit, providing optimized performance. The focus will be on the challenges of combining and fully utilizing GPUDirect RDMA and hardware multicast features in tandem to design high-performance broadcast operations for streaming applications.

GPU-Accelerated Image Processing for NASA’s Solar Dynamics Observatory (S5209)


Mark Cheung is an astrophysicist at the Lockheed Martin Solar and Astrophysics Lab in Palo Alto, CA, who has worked on a number of NASA-sponsored solar missions. Dr. Cheung will show how the instrument teams have deployed CUDA-enabled GPUs to perform deconvolution of SDO images, and will demonstrate how they leveraged cuFFT and Thrust to implement an efficient image processing pipeline.

Friday, March 20

The Ramses Code for Numerical Astrophysics: Toward Full GPU Enabling (S5531)


The evolution of the universe is an extraordinarily fascinating and, of course, complex problem. Scientists use the most advanced simulation codes to try to describe and understand the origin and behavior of the incredible variety of objects that populate it: stars, galaxies, black holes… Dr. Claudio Gheller, a computational scientist at the Swiss National Supercomputing Centre (CSCS), part of ETH Zurich, will present one of these codes, Ramses, and the ongoing work to enable it to efficiently exploit GPUs through the adoption of the OpenACC programming model. The most recent achievements will be shown, together with some of the scientific challenges GPUs can help address.

11:00-11:50am Featured Talk: Recent Advances in GPU-Accelerated Speech and Language Processing (S5634)


Ian Lane is an Assistant Professor at Carnegie Mellon University. He leads the speech and language processing group at CMU Silicon Valley and performs research in the areas of Speech Recognition, Spoken Language Understanding, and Human-Computer Interaction. In his talk, he will give an overview of the state of the art in Deep Learning for Speech and Language Processing and present recent work at CMU on GPU-accelerated methods for Real-Time Speech and Language Processing, joint optimization for Spoken Language Understanding, and continuous online learning methods.

DeepFont: Large-Scale Real-World Font Recognition from Images (S5720)


Adobe Research Scientist Jianchao Yang addresses the problem of recognizing the font style of text in an image. He will introduce a convolutional neural network decomposition approach, based on stacked convolutional autoencoders, to obtain effective features for classification. The model is trained on millions of images, which would not have been feasible without GPUs and CUDA. The proposed DeepFont system achieves top-5 accuracy of over 80% on a large labeled real-world test set they collected.

This is only a sampling of the hundreds of Accelerated Computing sessions to be presented by innovators and researchers using GPU computing for their groundbreaking work. Be sure to visit the GTC website for the complete list.

Readers of Parallel Forall can use the discount code GM15PFAB to get 20% off any conference pass! Don’t miss out and register now!

Interested in Hands-On Training?

Be sure to check out Mark Ebersole’s recent blog, “Learn GPU Computing with Hands-On Labs at GTC 2015”.

Annual HPC and GPU Supercomputing Group of Silicon Valley Meetup at GTC

It’s back! Be sure to join the Meetup on Thursday, March 19th and network with other scientists, researchers, and engineers using GPUs to accelerate their applications. We’re excited to have Andrej Karpathy, of Stanford University and Google Research, joining us – he will be talking about his machine learning and deep learning research: “Automated Image Captioning with ConvNets and Recurrent Nets”. Space is limited, so RSVP soon.
