A CUDA Dynamic Parallelism Case Study: PANDA

This post concludes an introductory series on CUDA Dynamic Parallelism. In my first post, I introduced Dynamic Parallelism by using it to compute images of the Mandelbrot set using recursive subdivision, resulting in large increases in performance and efficiency. The second post is an in-depth tutorial on the ins and outs of programming with Dynamic Parallelism, including synchronization, streams, memory consistency, and limits. In this post, I finish the series with a case study on an online track reconstruction algorithm for the high-energy physics PANDA experiment, part of the Facility for Antiproton and Ion Research in Europe (FAIR). The PANDA work was carried out in the scope of the NVIDIA Application Lab at Jülich.

The PANDA Experiment

PANDA (antiProton ANnihilation at DArmstadt) is a state-of-the-art hadron particle physics experiment currently under construction at FAIR in Darmstadt. It is scheduled to start operation in 2019.

Figure 1: The PANDA experiment and its detectors. Image courtesy of PANDA Collaboration.

Inside the PANDA experiment, accelerated antiprotons will collide with protons, forming intermediate and unstable particles (mesons, baryons, etc.), which will decay in cascades into stable particles, like electrons and photons. The unstable particles are of particular interest for PANDA, as they give insight into the processes governing this physics regime, quantum chromodynamics (QCD). Reconstructing all involved constituent particles of an event lets the physicists form a picture of the process, eventually confirming established physics theories, probing new ones and potentially finding exciting and unexpected results.

A typical particle physics experiment continuously reads out only a part of the whole detector. Only the data sub-samples from dedicated and fast trigger sub-detectors are processed at all times. When the trigger sub-system records a signal of a physical event of interest, such as a muon of particular momentum, it triggers the data recording of the whole detector system. Only then are all of the sub-detector layers read out and saved to disk for further offline analysis.

PANDA aims to study rare and possibly exotic physics processes. Because of this, the experiment will use online, triggerless event reconstruction: it will process the incoming data stream of the whole experiment on the fly, run sophisticated reconstruction algorithms, and use the results to decide whether or not to save a particular event for further, more precise and elaborate offline analysis.

The experiment will run at an event rate of 20 million events per second, which results in a very large amount of data. The antiproton beam, colliding in the center of the PANDA experiment, is structured as follows: 1600 ns long bursts, in which collisions are performed, are interleaved with collisionless 400 ns gaps. Each burst contains 40 events on average.

An important part of PANDA’s online reconstruction is online tracking. Tracking is the process of using the discrete detector responses (hits) particles leave while traversing active detector material to form the real, continuous trajectories of the particles (tracks). Two sub-detectors of the experiment are of particular interest for online tracking: the Straw Tube Tracker (STT), a sub-detector using more than 4000 tiny drift tubes (straws) to locate particles, and the Micro Vertex Detector (MVD), PANDA’s innermost, silicon-based detector with 10 million channels. Both are pointed out in Figure 1. Each event generates about 80 STT hits, which equates to 1.6 billion hits per second; the number of hits produced by the MVD is about a quarter of that. In total, the experiment will produce a data rate of 200 GB/s, which needs to be processed by the online reconstruction stage. Online processing is intended to reduce the total volume of data by a factor of 1000 (from 200 GB/s to 200 MB/s), which will then be stored for further processing by the scientists.

Triplet Finder Algorithm

One of the candidate algorithms for online tracking is the Triplet Finder, which operates on STT hits. The idea of the algorithm is to trade accuracy for speed: running at a very early stage of the experiment’s reconstruction, the algorithm primarily has to be fast rather than precise. The precision needed for high-quality physics can still be achieved later in the reconstruction chain with dedicated but slower algorithms.
The Triplet Finder uses certain layers of straws in the STT to seed particle track reconstruction; these are called pivot straws. Currently, three layers are selected to contain the pivot straws, as depicted in Figure 2.

Figure 2: Pivot and skewed straws in the STT detector. Image courtesy of PANDA Collaboration.

The algorithm starts by generating triplets for each tube in the pivot layers. A triplet is a virtual hit that combines a hit in a pivot straw with two neighboring hits using a center-of-gravity method. A pair of triplets from different pivot layers is then combined with the coordinate origin to form a circular track candidate, as shown in Figure 3. A track candidate is validated by associating hits from the non-pivot layers with it: hits must lie close to the track, and their number must exceed a certain threshold for the candidate to be accepted.
Running the algorithm like this gives only two-dimensional information about the track in the plane orthogonal to the direction of the straws. To extract 3D information, a similar but more computationally demanding algorithm is applied to slightly tilted or skewed straws. For a full description of Triplet Finder, see the paper Triplet based online track finding in the PANDA-STT.

Figure 3: Track reconstruction in the Triplet Finder algorithm. Image courtesy of PANDA Collaboration.

Online reconstruction has to process large quantities of data, and also has a large amount of parallelism. Different bursts can be processed independently. If necessary, each burst can be further subdivided into smaller time intervals called “bunches”, which can also be processed in parallel. The Triplet Finder algorithm can thus be summarized as follows:

  • the entire set of hits is split into bunches, which can be processed in parallel;
  • each bunch is processed in several stages, which must be performed sequentially;
  • within each stage, there is additional significant parallelism present;
  • a single bunch is not enough to utilize a GPU fully; to do that, O(10), or better, O(100) bunches are needed.

Implementing Triplet Finder with Dynamic Parallelism

The description of the task above lends itself very naturally to an implementation with CUDA Dynamic Parallelism. Each processing stage is implemented as a GPU kernel or a sequence of kernels. The host launches the reconstruction kernel, where each thread is responsible for a single bunch. Every thread then launches the kernels for its bunch, performing necessary synchronizations. As different bunches can be processed independently, each thread creates a single device stream to launch its kernels. To reduce false dependencies and provide better concurrency, the number of threads per block in the reconstruction kernel is set to a low value; we have found that 8 works well in practice. The entire kernel call tree is schematically depicted in Figure 4.
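The structure just described can be sketched as follows. The kernel names, launch configurations, and stage bodies are illustrative placeholders, not the actual Triplet Finder kernels; only the launch pattern (one parent thread per bunch, one non-blocking device stream per thread, stages ordered by the stream) reflects the implementation described above.

```cuda
#include <cstdio>

// Placeholder per-stage kernels; the real Triplet Finder stages differ.
__global__ void makeTriplets(int bunch)        { /* stage 1 */ }
__global__ void makeTrackCandidates(int bunch) { /* stage 2 */ }
__global__ void associateHits(int bunch)       { /* stage 3 */ }

// Host-launched reconstruction kernel: one thread per bunch. A small
// block size (8 threads) reduces false dependencies between launches.
__global__ void reconstruct(int nBunches)
{
    int bunch = blockIdx.x * blockDim.x + threadIdx.x;
    if (bunch >= nBunches) return;

    // Each thread gets its own non-blocking device stream, so different
    // bunches proceed independently of each other.
    cudaStream_t s;
    cudaStreamCreateWithFlags(&s, cudaStreamNonBlocking);

    // Kernels in the same stream execute in order, which gives the
    // required sequential processing of stages within a bunch.
    makeTriplets<<<64, 128, 0, s>>>(bunch);
    makeTrackCandidates<<<64, 128, 0, s>>>(bunch);
    associateHits<<<64, 128, 0, s>>>(bunch);

    cudaStreamDestroy(s);  // resources are released once the work drains
}

int main()
{
    int nBunches = 100;  // O(100) bunches are needed to fill the GPU
    reconstruct<<<(nBunches + 7) / 8, 8>>>(nBunches);
    cudaDeviceSynchronize();
    return 0;
}
```

A sketch like this needs to be compiled with relocatable device code and the device runtime, e.g. `nvcc -arch=sm_35 -rdc=true -lcudadevrt`.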

The complete source code for the CUDA dynamic parallelism implementation of the GPU Triplet Finder can be downloaded from PANDA SVN.

Figure 4: Schematic view of the kernel call tree for the Triplet Finder with CUDA Dynamic Parallelism (not all device-launched kernels are shown).

Performance Comparison

We compared the performance of our Dynamic Parallelism implementation of the Triplet Finder with a version that uses the same stage processing kernels, but launches them all from the host using one stream per bunch. Different bunch sizes were required for each approach. For host streams, setting the CUDA_DEVICE_MAX_CONNECTIONS environment variable to 24 gave the best performance. Figure 5 shows the relative performance of both approaches on an NVIDIA Tesla K20X GPU using CUDA 5.5 with driver version 331.22 and CUDA 6.0 with driver version 331.49. A clear performance improvement can be seen for both the dynamic parallelism and host streams versions from CUDA 5.5 to CUDA 6.0.

Figure 5: Performance of the host streams and dynamic parallelism implementations of the Triplet Finder with varying number of hits on CUDA 6.0.

The Dynamic Parallelism implementation of the Triplet Finder uses 1000 ns bunches, while the host streams approach uses 2000 ns bunches. This is because the host streams approach is limited by kernel launch throughput, so it is beneficial to launch a smaller number of larger kernels, even if this leads to a larger amount of computation. For a reasonable number of hits, using dynamic parallelism results in 20% faster processing. Dynamic parallelism outperforms host streams for three reasons.

  1. Avoiding extra PCI-e data transfers. The launch configuration of each subsequent kernel depends on results of the previous kernel, which are stored in device memory. For the dynamic parallelism version, reading these results requires just a single memory access, while for host streams a data transfer from device to host is needed.
  2. Achieving higher launch throughput. Launching kernels from the device achieves higher throughput than launching from the host using multiple streams.
  3. Reducing false dependencies between kernel launches. All host streams are served by a single thread, so while that thread waits on one stream, work cannot be queued into other streams, even if the launch configuration data is already available. With dynamic parallelism, multiple thread blocks queue kernels asynchronously and independently.
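The first point can be illustrated with a sketch. The kernel and variable names are illustrative, not from the actual Triplet Finder source; the point is that the device-side launcher reads the next stage's launch configuration with a single global-memory load, while the host-side version needs a PCIe transfer and a synchronization first.

```cuda
#include <cstdio>

// Placeholder validation stage; its grid size depends on a count
// computed by the previous stage and stored in device memory.
__global__ void validate(int nCandidates) { /* ... */ }

// Dynamic parallelism: one device-memory access, no PCIe round trip.
__global__ void launchNextStage(const int *d_nCandidates)
{
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        int n = *d_nCandidates;  // single global-memory load
        if (n > 0)
            validate<<<(n + 127) / 128, 128>>>(n);
    }
}

// Host streams equivalent: the count must first cross PCIe to the host,
// and the host must wait for it before it can configure the launch.
void launchFromHost(const int *d_nCandidates, cudaStream_t stream)
{
    int n;
    cudaMemcpyAsync(&n, d_nCandidates, sizeof(int),
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);  // stall until the transfer completes
    if (n > 0)
        validate<<<(n + 127) / 128, 128, 0, stream>>>(n);
}
```

In the host streams version the synchronization also blocks the single host thread, which is exactly the false-dependency problem of reason 3.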

The dynamic parallelism version is also more straightforward to implement, and leads to cleaner and more readable code. So in the case of PANDA, using dynamic parallelism improved both performance and productivity.

The work on using CUDA dynamic parallelism for PANDA has been presented at GTC 2014.

What Dynamic Parallelism is Not

With all the advantages of Dynamic Parallelism, however, it is also important to understand when not to use it. In all of the examples in this series of posts, each launched child kernel consists of a large number of threads and, usually, a large number of thread blocks. In some cases each child grid may consist of a single thread block, which still contains O(100) threads. However, it is a bad idea to launch a child grid with just a few threads. This would lead to severe under-utilization of GPU parallelism, since only a few threads in a warp would be active and there would not be enough warps, and GPU execution would be dominated by kernel launch latencies.
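A common way to avoid tiny child grids is to fall back to in-place processing below a work threshold. The following sketch (names and the threshold value are illustrative assumptions, not from any of the codes discussed here) shows the pattern:

```cuda
#include <cstdio>

// Child kernel: per-element work on one chunk (body is a placeholder).
__global__ void processChunk(const float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) { /* ... per-element work ... */ }
}

// Parent kernel: one thread per chunk of work.
__global__ void parent(const float *data, const int *chunkSize, int nChunks)
{
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= nChunks) return;

    int n = chunkSize[c];
    if (n >= 128) {
        // Enough work for at least one full block: a child grid pays off.
        processChunk<<<(n + 127) / 128, 128>>>(data, n);
    } else {
        // Too little work: a tiny child grid would mostly pay launch
        // latency, so process the chunk sequentially in this thread.
        for (int i = 0; i < n; ++i) { /* ... per-element work ... */ }
    }
}
```

The Mariani-Silver example from the first post in this series uses the same idea: below a minimum region size it stops subdividing and processes pixels directly.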

Nested parallelism can be visualized as a tree. But the tree for dynamic parallelism looks very different from more traditional CPU tree processing, as summarized in the following table.

characteristic   tree processing        dynamic parallelism
node             thin (1 thread)        thick (many threads)
branch degree    small (usually < 10)   large (usually > 100)
depth            large                  small

We can now see that Dynamic Parallelism is useful when it results in fat but relatively shallow trees. Adaptive grid algorithms often result in such trees. Traditional tree processing, in contrast, usually involves trees with small node degree but large depth, and is not a good use case for dynamic parallelism.

Conclusions

Dynamic parallelism provides clear benefits for applications that exhibit nested parallelism patterns. This applies to applications using hierarchical data structures, as well as applications processing non-uniform bunches of work. As we have seen in this series with both the Mariani-Silver algorithm and bunch processing for the Triplet Finder, using dynamic parallelism allows us to write such applications in a natural and easy-to-understand way. Moreover, in addition to better productivity, using dynamic parallelism also increases the performance and efficiency of our code.

About Andrew Adinetz

Andrew V. Adinetz received his M.S. degree in Computer Science in 2006 from Lomonosov Moscow State University, and his Ph.D. in Computer Science in 2009, also from MSU. He is currently working as a researcher at Forschungszentrum Jülich (NVIDIA Application Lab, Jülich Supercomputing Centre). His current research interests include GPU programming, algorithm design for many-core architectures, high-performance computing, and programming languages. Follow @adinetz on Twitter.