CUDA Pro Tip: Kepler Texture Objects Improve Performance and Flexibility

The Kepler architecture introduces texture objects, a new feature that makes textures easier to use and improves their performance.

Texture References

Textures are likely a familiar concept to anyone who’s done much CUDA programming. A feature from the graphics world, textures are images that are stretched, rotated and pasted on polygons to form the 3D graphics we are familiar with. Using textures for GPU computing has always been a pro tip for the CUDA programmer; they enable fast random access to arrays and use a cache to provide bandwidth aggregation. On the flip side, the legacy texture reference API is cumbersome to use because it requires manual binding and unbinding of texture references to memory addresses, as the following code demonstrates. Also, texture references can only be declared as static global variables and cannot be passed as function arguments.

#define N 1024
texture<float, 1, cudaReadModeElementType> tex;

// texture reference name must be known at compile time
__global__ void kernel() {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  float x = tex1Dfetch(tex, i);
  // do some work using x ...
}

void call_kernel(float *buffer) {
  // bind texture to buffer
  cudaBindTexture(0, tex, buffer, N*sizeof(float));

  dim3 block(128,1,1);
  dim3 grid(N/block.x,1,1);
  kernel<<<grid, block>>>();

  // unbind texture from buffer
  cudaUnbindTexture(tex);
}

int main() {
  // declare and allocate memory
  float *buffer;
  cudaMalloc(&buffer, N*sizeof(float));

  call_kernel(buffer);

  // free device memory
  cudaFree(buffer);
}

Texture Objects

Kepler GPUs and CUDA 5.0 introduce a new feature called texture objects (sometimes called bindless textures, since they don’t require manual binding/unbinding) that greatly improves the usability and programmability of textures. Texture objects use the new cudaTextureObject_t class API, whereby textures become first-class C++ objects and can be passed as arguments just as if they were pointers. There is no need to know at compile time which textures will be used at run time, which enables much more dynamic execution and flexible programming, as shown in the following code.

#define N 1024

// texture object is a kernel argument
__global__ void kernel(cudaTextureObject_t tex) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  float x = tex1Dfetch<float>(tex, i);
  // do some work using x ...
}

void call_kernel(cudaTextureObject_t tex) {
  dim3 block(128,1,1);
  dim3 grid(N/block.x,1,1);
  kernel<<<grid, block>>>(tex);
}

int main() {
  // declare and allocate memory
  float *buffer;
  cudaMalloc(&buffer, N*sizeof(float));

  // create texture object
  cudaResourceDesc resDesc;
  memset(&resDesc, 0, sizeof(resDesc));
  resDesc.resType = cudaResourceTypeLinear;
  resDesc.res.linear.devPtr = buffer;
  resDesc.res.linear.desc.f = cudaChannelFormatKindFloat;
  resDesc.res.linear.desc.x = 32; // bits per channel
  resDesc.res.linear.sizeInBytes = N*sizeof(float);

  cudaTextureDesc texDesc;
  memset(&texDesc, 0, sizeof(texDesc));
  texDesc.readMode = cudaReadModeElementType;

  // create texture object: we only have to do this once!
  cudaTextureObject_t tex=0;
  cudaCreateTextureObject(&tex, &resDesc, &texDesc, NULL);

  call_kernel(tex); // pass texture as argument

  // destroy texture object
  cudaDestroyTextureObject(tex);

  // free device memory
  cudaFree(buffer);
}
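The example above wraps linear device memory. Texture objects can also wrap a CUDA array, which enables 2D fetches via tex2D. The following sketch shows how that looks; the dimensions, kernel, and descriptor settings here are illustrative assumptions, not part of the original example.

```cuda
#include <cstring>
#include <cuda_runtime.h>

#define WIDTH  512
#define HEIGHT 512

// texture object is a kernel argument, exactly as in the 1D case
__global__ void kernel2d(cudaTextureObject_t tex) {
  int x = blockIdx.x * blockDim.x + threadIdx.x;
  int y = blockIdx.y * blockDim.y + threadIdx.y;
  // add 0.5f to sample at texel centers
  float v = tex2D<float>(tex, x + 0.5f, y + 0.5f);
  // do some work using v ...
}

int main() {
  // allocate a 2D CUDA array (copying host data into it is omitted here)
  cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
  cudaArray_t array;
  cudaMallocArray(&array, &desc, WIDTH, HEIGHT);

  // resource description now points at the array, not linear memory
  cudaResourceDesc resDesc;
  memset(&resDesc, 0, sizeof(resDesc));
  resDesc.resType = cudaResourceTypeArray;
  resDesc.res.array.array = array;

  cudaTextureDesc texDesc;
  memset(&texDesc, 0, sizeof(texDesc));
  texDesc.readMode = cudaReadModeElementType;
  texDesc.filterMode = cudaFilterModePoint;
  texDesc.addressMode[0] = cudaAddressModeClamp;
  texDesc.addressMode[1] = cudaAddressModeClamp;

  cudaTextureObject_t tex = 0;
  cudaCreateTextureObject(&tex, &resDesc, &texDesc, NULL);

  dim3 block(16, 16, 1);
  dim3 grid(WIDTH/block.x, HEIGHT/block.y, 1);
  kernel2d<<<grid, block>>>(tex);

  cudaDestroyTextureObject(tex);
  cudaFreeArray(array);
}
```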
Moreover, texture objects need to be instantiated only once, so there is no need to continuously bind and unbind them, and they are not subject to the hardware limit of 128 texture references. (The Kepler hardware limit is over one million texture objects, a limit unlikely to be reached by any practical CUDA application in the lifetime of the architecture!) Using texture objects, the overhead of binding (up to 1 μs) and unbinding (up to 0.5 μs) textures is eliminated. What is not commonly known is that each texture reference that is bound when a kernel is launched incurs added launch latency of up to 0.5 μs per texture reference. This launch overhead persists even if the bound textures are never referenced by the kernel. Again, using texture objects instead of texture references completely removes this overhead.

Strong Scaling in HPC

While these overheads may sound minor for any application that uses kernels that run for milliseconds or greater in duration, they are important for latency-sensitive, short-running kernels. High Performance Computing (HPC) applications that need strong scaling typify this use case. Strong scaling is how the solution time varies with the number of processors for a fixed global problem size. The goal is to solve a fixed problem in as short a time as possible. There is usually a limit to strong scaling, as at some point an application will become bound by the inter-processor communication bandwidth or latencies that arise from doing less work on each processor. Beyond this limit the total performance will actually decrease.

Figure 1: Strong-scaling performance of the QUDA solver using both texture references and texture objects.

There are many large-scale HPC applications that will be running on the Titan supercomputer at Oak Ridge National Laboratory in Tennessee, for which achieving excellent strong scaling performance will be vital. Lattice Quantum Chromodynamics (LQCD) is one such application. LQCD uses a combination of sparse linear solvers, molecular dynamics algorithms and Monte Carlo methods to probe the structure of the nucleus. Using the highly optimized QUDA library, the legacy LQCD applications MILC and Chroma can run on Titan, making full use of the attached Tesla K20X accelerators. The QUDA library makes extensive use of textures, and using texture references the strong scaling tops out at 2 TFLOP/s, as you can see in Figure 1. Profiling with the CUDA visual profiler revealed that the overhead of texture binding and unbinding contributed significantly to time-critical communication routines running on the GPU. Moreover, many of the kernels run for O(10 μs) using up to ten textures simultaneously, so the launch latency overhead is a significant bottleneck. Once the library was rewritten to use texture objects, strong scaling performance increased significantly and throughput rose to nearly 3 TFLOP/s at 27 GPUs (see Figure 1).

Texture objects are yet another powerful feature of the Kepler architecture that make it easier to achieve high performance and scalability with CUDA.


About M Clark

Dr. Clark’s background is in high energy physics, having completed doctoral research in Monte Carlo algorithms for lattice quantum chromodynamics in 2005, graduating from the University of Edinburgh. Upon subsequently moving to Boston University, Dr. Clark focused upon adaptive multi-grid algorithms and symplectic integrators. It was during this time that research was initiated into harnessing GPUs for lattice QCD computation; this research has since evolved into the QUDA library. Dr. Clark spent 2009-2011 at Harvard University, continuing to work on algorithms for GPUs and many-core processors, with a focus on signal processing. Dr. Clark moved to NVIDIA in 2011, and continues to work at the interface between applications, algorithms and parallel computation.
  • Jason Bourne


There are a couple of bugs in this code: the case of the token cudatextureObject_t should actually be cudaTextureObject_t, and there are invalid symbols such as a double quote in the kernel launch code.

    Importantly, upon trying this code my compiler complains:

    error: more than one instance of overloaded function “tex1Dfetch” matches the argument list:
    Making a similar setup for 2D texture, I get:
    error : more than one instance of overloaded function “tex2D” matches the argument list:

    My platform is:

    Windows 7, Visual Studio 2012, NVIDIA Nsight 3.2, CUDA 5.5 Runtime Project, Debug Compile, compute capability set to compute_35,sm_35
    GK110 Card (GTX 780), Intel i7.

    • Hi,

      After I added a template type to “tex1Dfetch”, I could compile the code above.

      I mean,

      float x = tex1Dfetch(tex, i);

      should be

      float x = tex1Dfetch<float>(tex, i);

      I hope this helps you :-)

  • M Clark

    The code snippets in the blog post have now been rectified. Thanks for posting.

  • Neuro CSUT


    We just bought a PC with NVIDIA card and are trying to run different example code from different sources. We ran into trouble now with the command:

    cudaCreateTextureObject(&tex, &resDesc, &texDesc, NULL);

    It seems that in our computer the function fails to assign a value to “tex”.
    When adding an assertion in the next line:
    assert(tex != 0);

    the assertion fails.
    We have also tried to surround the object creation command with checkCudaErrors() in hope to get more information, but it gives no error message (so the program thinks everything is fine).

    Could you help us sort it out? We really have run out of ideas here and we have not found any hints online either.

    Our system specs:
    Ubuntu 14.04
    GeForce GTX 750
    Driver Version: 331.62
    Cuda toolkit 6.0
    compiling with parameter “--gpu-architecture sm_50”

    PS! Also worth mentioning that we have a large-scale example using Cuda via
    python/pylearn2/Theano running without problems. So in other cases our Cuda works fine and thus it seems a very specific issue.

    • Antti-Pekka Hynninen

      I have the same problem on MacBook Pro with GTX 750M. Cuda 6.5

      • Submitted a bug report in Cuda Zone, also mentioned that it does not work on your system. I will let you know if we get help from there. Hope for the best.

        • Can you provide the bug ID number?

          • Bug ID is 1562891

          • I looked at the bug, and our QA was able to reproduce the problem. It is now being investigated. Sorry for the inconvenience, we’ll resolve it as soon as we can.

          • Please inform us as soon as any progress is made on fixing the bug, so we could plan our work.