This week’s CUDA Spotlight is on Jon Rogers of Texas A&M University. Jon is director of the Helicopter and Unmanned Systems Lab, where he works on new technologies for autonomous systems.
He is currently exploring new algorithms and sensing technologies to increase the complexity of tasks that robotic devices can perform. His research encompasses the fields of non-linear dynamics, robust control, and high-performance computing.
You can read Jon’s full Spotlight here. Here is an excerpt.
NVIDIA: What problems has CUDA helped you solve?
Jon: CUDA has provided an entry point to GPU programming and execution that is highly compatible with our current guidance and control software. As we search for new ways to incorporate uncertainty quantification in real-time guidance laws, we are naturally drawn to GPU-based Monte Carlo due to its flexibility in handling nonlinear dynamics and non-Gaussian behavior.
We leverage CUDA primarily for parallel trajectory simulation, which means we have developed dynamic models for several vehicles (mostly aircraft) that run within a GPU kernel. Launching thousands of threads means we can run numerous dynamic simulations at once.
CUDA specifically has allowed us to take existing codes and port them to the GPU relatively quickly. The core of the GPU codes we run today was originally built for CPU execution and validated extensively with experimental data. The ability to leverage legacy simulation codes in this manner has been a key enabler. It is also convenient that the same CUDA software we use for our desktop simulation codes can run on embedded GPUs on-board our robotic vehicles with minimal changes.
NVIDIA: What specific approaches did you use to apply the CUDA platform to your work?
Jon: One specific technique that comes to mind is texture memory interpolation, which we use in path planning for aerial robotic vehicles. Often we must determine whether a candidate path prematurely impacts terrain using an on-board terrain database. For high-resolution terrain data (e.g., in mountainous areas), interpolation along the path may be very time-consuming. This is especially true when evaluating hundreds of candidate paths in real time. We bind our terrain database to texture memory, which has led to orders-of-magnitude reductions in the time required for terrain interpolation during impact analysis.
Our lab is becoming increasingly interested in embedded GPU hardware as we take these new control laws and port them to vehicles for testing. New embedded GPU devices recently released by NVIDIA and others will allow us to do just that. For our research, low power requirements and small size are critical.