CUDA Pro Tip: Occupancy API Simplifies Launch Configuration

CUDA programmers often need to decide on a block size to use for a kernel launch. For key kernels, it's important to understand the constraints of the kernel and the GPU it is running on to choose a block size that will result in good performance. One common heuristic used to choose a good block size is to aim for high occupancy, which is the ratio of the number of active warps per multiprocessor to the maximum number of warps that can be active on the multiprocessor at once. Higher occupancy does not always mean higher performance, but it is a useful metric for gauging the latency-hiding ability of a kernel.


Before CUDA 6.5, calculating occupancy was tricky. It required implementing a complex computation that took into account the capabilities of the present GPU (including register file and shared memory size) and the properties of the kernel (shared memory usage, registers per thread, threads per block). Implementing the occupancy calculation is difficult, so very few programmers took this approach, instead using the occupancy calculator spreadsheet included with the CUDA Toolkit to find good block sizes for each supported GPU architecture.

CUDA 6.5 includes several new runtime functions to aid in occupancy calculations and launch configuration. The core occupancy calculator API, cudaOccupancyMaxActiveBlocksPerMultiprocessor, produces an occupancy prediction based on the block size and shared memory usage of a kernel. This function reports occupancy in terms of the number of concurrent thread blocks per multiprocessor. Note that this value can be converted to other metrics. Multiplying by the number of warps per block yields the number of concurrent warps per multiprocessor; further dividing concurrent warps by max warps per multiprocessor gives the occupancy as a percentage.

CUDA 6.5 also introduces occupancy-based launch configurator APIs, cudaOccupancyMaxPotentialBlockSize and cudaOccupancyMaxPotentialBlockSizeVariableSMem, which heuristically calculate a block size that achieves the maximum multiprocessor-level occupancy. You can use the VariableSMem version for kernels where the amount of shared memory allocated depends on the number of threads per block. Note that there are also CUDA driver API equivalents of these functions. The following example demonstrates the use of these APIs. It first chooses a reasonable block size by calling cudaOccupancyMaxPotentialBlockSize, and then calculates the theoretical maximum occupancy the kernel will achieve on the present device by calling cudaGetDeviceProperties and cudaOccupancyMaxActiveBlocksPerMultiprocessor.

#include <stdio.h>

__global__ void MyKernel(int *array, int arrayCount)
{
  int idx = threadIdx.x + blockIdx.x * blockDim.x;
  if (idx < arrayCount)
  {
    array[idx] *= array[idx];
  }
}

void launchMyKernel(int *array, int arrayCount)
{
  int blockSize;   // The launch configurator returned block size
  int minGridSize; // The minimum grid size needed to achieve the
                   // maximum occupancy for a full device launch
  int gridSize;    // The actual grid size needed, based on input size

  cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize,
                                     MyKernel, 0, 0);
  // Round up according to array size
  gridSize = (arrayCount + blockSize - 1) / blockSize;

  MyKernel<<<gridSize, blockSize>>>(array, arrayCount);

  cudaDeviceSynchronize();

  // Calculate theoretical occupancy
  int maxActiveBlocks;
  cudaOccupancyMaxActiveBlocksPerMultiprocessor(&maxActiveBlocks,
                                                MyKernel, blockSize,
                                                0);

  int device;
  cudaDeviceProp props;
  cudaGetDevice(&device);
  cudaGetDeviceProperties(&props, device);

  float occupancy = (maxActiveBlocks * blockSize / props.warpSize) /
                    (float)(props.maxThreadsPerMultiProcessor /
                            props.warpSize);

  printf("Launched blocks of size %d. Theoretical occupancy: %f\n",
         blockSize, occupancy);
}
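For kernels whose dynamic shared memory allocation grows with the block size, the VariableSMem variant takes a unary functor that maps a candidate block size to the shared memory it would require. Here is a hedged sketch (not from the original example — the kernel, functor, and launcher names are hypothetical) for a kernel that allocates one int of shared memory per thread:

```cuda
__global__ void MyDynamicSMemKernel(int *array, int arrayCount)
{
  extern __shared__ int scratch[];   // one int per thread
  int idx = threadIdx.x + blockIdx.x * blockDim.x;
  if (idx < arrayCount)
  {
    scratch[threadIdx.x] = array[idx];
    array[idx] = scratch[threadIdx.x] * scratch[threadIdx.x];
  }
}

// Maps a candidate block size to the dynamic shared memory it needs.
struct SMemPerBlock
{
  size_t operator()(int blockSize) const
  {
    return blockSize * sizeof(int);
  }
};

void launchDynamicSMemKernel(int *array, int arrayCount)
{
  int minGridSize, blockSize;
  cudaOccupancyMaxPotentialBlockSizeVariableSMem(
      &minGridSize, &blockSize, MyDynamicSMemKernel, SMemPerBlock(), 0);

  int gridSize = (arrayCount + blockSize - 1) / blockSize;
  MyDynamicSMemKernel<<<gridSize, blockSize, blockSize * sizeof(int)>>>(
      array, arrayCount);
}
```

Because the configurator evaluates the functor for each candidate block size, the block size it returns already accounts for the shared memory that block size would consume.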

cudaOccupancyMaxPotentialBlockSize makes it possible to compute a reasonably efficient execution configuration for a kernel without having to directly query the kernel’s attributes or the device properties, regardless of what device is present or any compilation details. This can greatly simplify the task of frameworks (such as Thrust) that must launch user-defined kernels. It is also handy for kernels that are not primary performance bottlenecks, where the programmer just wants a simple way to run the kernel with correct results, rather than hand-tuning the execution configuration.

The CUDA Toolkit version 6.5 also provides a self-documenting, standalone occupancy calculator and launch configurator implementation in <CUDA_Toolkit_Path>/include/cuda_occupancy.h for any use cases that cannot depend on the CUDA software stack. A spreadsheet version of the occupancy calculator is also included (and has been for many CUDA releases). The spreadsheet version is particularly useful as a learning tool that visualizes the impact of changes to the parameters that affect occupancy (block size, registers per thread, and shared memory per thread). You can find more information in the CUDA C Programming Guide and CUDA Runtime API Reference.


About Mark Harris

Mark is Chief Technologist for GPU Computing Software at NVIDIA. Mark has fifteen years of experience developing software for GPUs, ranging from graphics and games, to physically-based simulation, to parallel algorithms and high-performance computing. Mark has been using GPUs for general-purpose computing since before they even supported floating point arithmetic. While a Ph.D. student at UNC he recognized this nascent trend and coined a name for it: GPGPU (General-Purpose computing on Graphics Processing Units), and started to provide a forum for those working in the field to share and discuss their work. Follow @harrism on Twitter
  • Joseph Pingenot

    Nice. That looks quite useful!

  • karthikeyan Natarajan

    I needed this one for a very long time and wrote it myself using CUDA 5.5.

    Also, I found some mistakes in the CUDA occupancy calculator XLS. I fixed them and wrote an online version here.
    Hope this helps someone.

  • amyvnee

    How does it look when we try a 2D or even 3D block?

    • Mark Harris

      For now you will need to compute your own 2D/3D block dimensions from the 1D thread counts suggested by the API.
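      As a hedged sketch of that suggestion (not part of the reply above — `My2DKernel` is a hypothetical 2D-indexed kernel): fix one dimension at the warp size and derive the other from the 1D count the API returns, assuming the kernel only cares about the total thread count.

```cuda
int minGridSize, blockSize;
cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize,
                                   My2DKernel, 0, 0);

int blockDimX = 32;                     // warp-sized x dimension
int blockDimY = blockSize / blockDimX;  // remaining threads in y
dim3 block(blockDimX, blockDimY);       // same total thread count
```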

  • Omar Valerio

    Hello Mark,

    This API looks great. I compiled the example you provided above using CUDA 6.5 install. Also wanted to comment that I got a warning concerning the method signature for the kernel parameter.

    $ nvcc
    /usr/local/cuda-6.5/bin/../targets/x86_64-linux/include/cuda_runtime.h(1394): warning: argument of type “void (*)(int *, int)” is incompatible with parameter of type “const void *”
    detected during:
    instantiation of “cudaError_t ::cudaOccupancyMaxPotentialBlockSizeVariableSMem(int *, int *, T, UnaryFunction, int) [with UnaryFunction=::__cudaOccupancyB2DHelper, T=void (*)(int *, int)]”
    (1278): here
    instantiation of “cudaError_t ::cudaOccupancyMaxPotentialBlockSize(int *, int *, T, size_t, int) [with T=void (*)(int *, int)]”

    Nevertheless the code is running fine. I just wanted to tell in case someone else experienced this. I should also tell my compiler is gcc
    $ gcc --version
    gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3


  • Eduard Bondarenko

    Launched blocks of size 768. Theoretical occupancy: 0.000000

    GPU – Tesla C2075

    Why do I have 0 occupancy when I use cudaSetDevice with the GPU above?

    • Mark Harris

      What are you using to measure Theoretical occupancy? What are the resources used by your kernel (registers per thread, shared memory per block)?

  • sahmes

    Hi, very helpful, thanks! However I have a kernel where the amount of shared memory depends on the block dimensions, what can I do in this case?