About Calisa Cole

Calisa joined NVIDIA in 2003 and currently focuses on marketing related to CUDA, NVIDIA’s parallel computing architecture. Previously she ran Cole Communications, a PR agency for high-tech startups. She majored in Russian Studies at Wellesley and earned an MA in Communication from Stanford. Calisa is married and the mother of three boys. Her favorite non-work activities are fiction writing and playing fast games of online Scrabble.

CUDA Spotlight: GPU-Accelerated Agent-Based Simulation of Complex Systems

This week’s Spotlight is on Dr. Paul Richmond, a Vice Chancellor’s Research Fellow at the University of Sheffield (a CUDA Research Center). Paul’s research interests relate to the simulation of complex systems and to parallel computer hardware.

The following is an excerpt from our interview (read the complete Spotlight here).

NVIDIA: Paul, tell us about FLAME GPU.
Paul: Agent-Based Simulation is a powerful technique used to assess and predict group behavior from a number of simple interacting rules between communicating autonomous individuals (agents). Individuals typically represent some biological entity such as a molecule, cell or organism and can therefore be used to simulate systems at varying biological scales.
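The core idea of simple local rules producing group-level behavior can be sketched in a few lines of Python. This is purely an illustrative toy (an infection-spread model with made-up parameters), not FLAME GPU’s actual API or model format:

```python
import random

# Toy agent-based simulation: each agent follows one simple rule --
# move randomly on a ring of cells, and adopt the "infected" state if it
# lands on a cell occupied by an infected agent. Group-level spread
# emerges from these local interactions.

def step(positions, infected, width, rng):
    """Advance every agent one tick: random walk, then local infection."""
    positions = [(p + rng.choice((-1, 0, 1))) % width for p in positions]
    infected_cells = {p for p, i in zip(positions, infected) if i}
    infected = [i or (p in infected_cells) for p, i in zip(positions, infected)]
    return positions, infected

def simulate(n_agents=100, width=20, ticks=50, seed=42):
    rng = random.Random(seed)
    positions = [rng.randrange(width) for _ in range(n_agents)]
    infected = [False] * n_agents
    infected[0] = True                      # one initially infected agent
    for _ in range(ticks):
        positions, infected = step(positions, infected, width, rng)
    return sum(infected)                    # how many agents ended up infected

print(simulate())
```

Because every agent applies the same rule independently each tick, a framework like FLAME GPU can map each agent to its own GPU thread.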

The Flexible Large-scale Agent Modelling Environment for the GPU (FLAME GPU) is software that enables high-level descriptions of communicating agents to be automatically translated into code for GPU hardware. With FLAME GPU, simulation performance is enormously increased over traditional agent-based modeling platforms, and interactive visualization can easily be achieved. The GPU architecture and the underlying software algorithms are abstracted from users of the FLAME GPU software, ensuring accessibility to users in a wide range of domains and application areas.

NVIDIA: How does FLAME GPU leverage GPU computing?
Paul: Unlike other agent-based simulation frameworks, FLAME GPU is designed from the ground up with parallelism in mind. As such, it is possible to ensure that agents and behavior are mapped to the GPU efficiently, in a way that minimizes data transfer during simulation.


CUDA Spotlight: GPU-Accelerated Speech Recognition

This week’s Spotlight is on Dr. Ian Lane of Carnegie Mellon University. Ian is an Assistant Research Professor and leads a speech and language processing research group based in Silicon Valley. He co-directs the CUDA Center of Excellence at CMU with Dr. Kayvon Fatahalian.

The following is an excerpt from our interview (read the complete Spotlight here).

NVIDIA: Ian, what is Speech Recognition?
Ian: Speech Recognition refers to the technology that converts an audio signal into the sequence of words that the user spoke. By analyzing the frequencies within a snippet of audio, we can determine what sounds within spoken language a snippet most closely matches, and by observing sequences of these snippets we can determine what words or phrases the user most likely uttered.
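The matching step Ian describes can be sketched as a toy nearest-neighbor classifier: reduce each audio snippet to a vector of band energies and pick the stored sound template closest in that feature space. The templates, labels, and feature values below are invented for illustration; real recognizers use far richer acoustic models:

```python
import math

# Hypothetical per-sound templates: each is a vector of band energies.
TEMPLATES = {
    "s":  [0.1, 0.2, 0.9],   # made-up: energy concentrated in high bands
    "ah": [0.9, 0.5, 0.1],   # made-up: energy concentrated in low bands
    "ee": [0.4, 0.9, 0.3],
}

def closest_sound(features):
    """Return the label of the template nearest to this snippet's features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda label: dist(TEMPLATES[label], features))

# A sequence of snippet feature vectors decodes to a sequence of sounds.
snippets = [[0.2, 0.3, 0.8], [0.8, 0.6, 0.2]]
print([closest_sound(f) for f in snippets])  # → ['s', 'ah']
```

Scoring every snippet against every template independently is exactly the kind of data-parallel work that maps well onto a GPU.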

Speech Recognition spans many research fields, including signal processing, computational linguistics, machine learning and core problems in computer science, such as efficient algorithms for large-scale graph traversal. Speech Recognition is also one of the core technologies required to realize natural Human Computer Interaction (HCI), and it is becoming a prevalent technology in the interactive systems being developed today.

NVIDIA: What are examples of real-world applications?
Ian: In recent years, speech-based interfaces have become much more prevalent, including virtual personal assistants such as Siri from Apple and Google Voice Search, as well as speech interfaces for smart TVs and in-vehicle systems.


CUDA Spotlight: GPU-Accelerated Quantum Chemistry


This week’s Spotlight is on Professor Todd Martínez of Stanford.

Professor Martínez’s research lies in the area of theoretical chemistry, emphasizing the development and application of new methods which accurately and efficiently capture quantum mechanical effects.

Professor Martínez pioneered the use of GPU technology for computational chemistry, culminating in the TeraChem software package that uses GPUs for first principles molecular dynamics. He is a founder of PetaChem, the company that distributes this software.

The following is an excerpt from our interview (you can read the complete Spotlight here).

NVIDIA: Todd, tell us about TeraChem.
Todd: TeraChem simulates the dynamics and motion of molecules, solving the electronic Schrödinger equation to determine the forces between atoms. This is often called first principles molecular dynamics or ab initio molecular dynamics.

The primary advantage over empirical force fields (such as those often used for protein structure) is that chemical bond rearrangements and electron transfer can be described seamlessly.


If the Virtual Zapato Fits, Wear It! (GPU-Accelerated Augmented Reality)

This week’s Spotlight is on Néstor Gómez, CEO of Artefacto Estudio in Mexico City.

Artefacto Estudio is a developer of interactive applications and games. The company’s projects include a real-time virtual shoe fitting kiosk that allows people to “try on” shoes using augmented reality powered by Microsoft Kinect and GPU computing (see the video).

The following is an excerpt from our interview (you can read the complete Spotlight here).

NVIDIA: Néstor, tell us a bit about Artefacto Estudio.
Néstor: Artefacto is an independent development studio. We integrate solutions using cutting-edge technologies like Microsoft Kinect, Oculus Rift and Leap Motion.

NVIDIA: How did you become involved in the shoe industry?
Néstor: An ad agency, Kempertrautmann, was seeking a technology partner to work on a prototype for a virtual shoe fitting exhibit for Goertz, the German shoe company.

NVIDIA: Tell us about the prototype you created for Goertz.


CUDA Spotlight: GPU-Accelerated Cosmology

This week’s Spotlight is on Dr. Debbie Bard, a cosmologist at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC).

KIPAC members work in the Physics and Applied Physics Departments at Stanford University and at the SLAC National Accelerator Laboratory.

To handle the massive amounts of data involved in cosmological measurements, Debbie and her colleagues Matt Bellis (now an assistant professor at Siena College) and Mark Allen (now a data scientist at Chegg) teamed up to explore the potential of GPU computing and CUDA.

They concluded that “GPUs are a useful tool for cosmological calculations, allowing calculations to be made one or two orders of magnitude faster.” Their results were presented in a paper titled Cosmological Calculations on the GPU, which appeared earlier this year in Astronomy and Computing.
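The calculations in question are typically brute-force pairwise statistics over galaxy catalogs. As an illustration only (not the paper’s actual code, and with made-up coordinates), here is the O(N²) kernel at the heart of such work, computing the angular separation between every pair of sky positions; on a GPU, each pair can be handled by an independent thread:

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle angle between two sky positions (RA/Dec in radians)."""
    cos_theta = (math.sin(dec1) * math.sin(dec2)
                 + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.acos(max(-1.0, min(1.0, cos_theta)))  # clamp for safety

def all_pair_separations(galaxies):
    """All unique pairwise separations -- the O(N^2) loop GPUs parallelize."""
    seps = []
    for i in range(len(galaxies)):
        for j in range(i + 1, len(galaxies)):
            seps.append(angular_separation(*galaxies[i], *galaxies[j]))
    return seps

galaxies = [(0.0, 0.0), (0.01, 0.0), (0.0, 0.01)]  # toy (ra, dec) in radians
print(all_pair_separations(galaxies))
```

For N galaxies there are N(N−1)/2 pairs, which is why even modest catalogs make this calculation expensive on a CPU.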

The following is an excerpt from our interview (you can read the complete Spotlight here).


CUDA Spotlight: GPU-Accelerated Cancer Detection

This week’s Spotlight is on Diego Rivera, a senior software engineer at Hologic, Inc. Hologic is a leading developer of medical imaging systems and surgical products, with an emphasis on serving the healthcare needs of women throughout the world.

The following is an excerpt from our interview (you can read the complete Spotlight here).

NVIDIA: Diego, tell us about your role at Hologic.
Diego: I’m part of a team that has been able to deliver great solutions for breast cancer detection. I’m a lead engineer in the design and implementation of GPU-accelerated, high-performance applications in the areas of tomosynthesis (3D mammography), digital mammography, computer-aided detection, and specimen radiography systems.

NVIDIA: What are some key challenges in your field?
Diego: With all the advantages of the digital era, film-based mammography has not yet vanished. Introducing a new modality such as tomosynthesis is challenging because the new capabilities have also led to new image management requirements (in areas such as storage, image reviewing and vendor viewing interoperability).

Despite the challenges, tomosynthesis has proved more advantageous for early detection of cancer than standard digital imaging, and it has the potential to reduce the number of false positives. Our goal is to enable everyone to embrace this medium.

Hologic's Selenia Dimensions 3D Breast Tomosynthesis System

NVIDIA: What role does GPU computing play in your work?
Diego: It has allowed us to process and reprocess images in real time. The impact of this is that there is no wait time added for screening and diagnostic results, which in turn minimizes the patient’s anxiety.

One of our objectives is to improve the patient experience by controlling dose and compression time without sacrificing image quality. Real-time tomosynthesis would simply not be possible without GPUs. Our solution is deployed in a variety of hospitals and health care centers, including Massachusetts General Hospital and Bethesda Women’s Health Center.


CUDA Spotlight: CUDA-Accelerated Adventures

This week’s Spotlight is on Cyrille Favreau and Christophe Favreau, brothers who leverage GPU computing in different ways, with equally compelling results.

Cyrille, a technical architect by day, uses CUDA in his free time to pursue his interest in visualization technologies. His projects include building a real-time ray-tracing engine and molecule visualizer, and exploring fractal theory.

Christophe, a professional photographer and videographer, is passionate about sailing and nature. GPUs help him produce beautiful work as he travels the world.

The following is an excerpt from our interview (you can read the complete Spotlight here).

NVIDIA: Cyrille, when did you first start using GPUs?
Cyrille: As a technical architect, I am always looking for new solutions to problems. In 2009, I discovered CUDA and GPU computing and that took me to a whole new world. I could see that massively parallel architectures were about to shake the foundations of traditional programming.

NVIDIA: How have you used CUDA to pursue your passion for visualization projects?
Cyrille: My ray-tracing engine, called SoL-R for Speed of Light Ray-tracer, and my molecule visualizer initially started as simple learning projects to help me understand GPUs. But programming on GPUs became so exciting that I kept adding new functionality. I have a number of projects in the pipeline now, such as coupling SoL-R with the Oculus Rift virtual reality headset, and exploring fractal mathematics applied to financial data.


CUDA Spotlight: GPU-Accelerated Genomics

This week’s Spotlight is on Dr. Knut Reinert. Knut is a professor at Freie Universität in Berlin, Germany, and chair of the Algorithms in Bioinformatics group in the Institute of Computer Science. Knut and his team focus on the development of novel algorithms and data structures for problems in the analysis of large-scale biomedical data. In particular, the group develops mathematical models for analyzing large genomic sequences and data derived from mass spectrometry experiments (for example, for detecting differential expression of proteins between normal and diseased samples). Previously, Knut was at Celera Genomics, where he worked on bioinformatics algorithms and software for the Human Genome Project, which assembled the very first human genome.

On Oct. 22, 2013, Knut will deliver a GTC Express Webinar presentation titled: Intro to SeqAn, an Open-Source C++ Template Library.

The following is an excerpt from our interview (you can read the complete Spotlight here).

Knut Reinert, Freie Univ. Berlin

NVIDIA: Knut, tell us about the SeqAn library.
Knut: Before setting up the Algorithmic Bioinformatics group at Freie Universität, I had been working for years at a U.S. company – Celera Genomics in Maryland – where I worked on the assembly of both the Drosophila (fruit fly) and human genomes. A central part of these projects was the development of large software packages containing algorithms for assembly and genome analysis developed by the Informatics Research team at Celera.

Although successful, the endeavor clearly showed the lack of available implementations in sequence analysis, even for standard tasks. Implementations of much-needed algorithmic components were either unavailable or hard to access in third-party, monolithic software products.

With this experience in mind, and having been educated at the Max-Planck Institute for Computer Science in Saarbrücken (the home of very successful software libraries such as LEDA and CGAL), I put the development of such a software library high on my research agenda.


CUDA Spotlight: GPU-Accelerated Neuroscience

This week’s Spotlight is on Dr. Adam Gazzaley of UC San Francisco, where he is the founding director of the Neuroscience Imaging Center and an Associate Professor in Neurology, Physiology and Psychiatry. His work was featured in Nature in September 2013.

Below is an excerpt from our interview (you can read the complete Spotlight here):

NVIDIA: Adam, how are you using GPU computing in your research?
Adam: We are working with a distributed team (UCSF, Stanford, UCSD and Eye Vapor) to CUDA-enable EEG (electroencephalography) processing to increase the fidelity of real-time brain activity recordings.

The goal is to more accurately represent the brain sources and neural networks, as well as to perform real-time artifact correction and mental state decoding. Not only will this improve the visualization capabilities, but more importantly, it will move EEG closer to being a real-time scientific tool.

Where CUDA and the GPU really excel is with very intense computations that use large matrices. We generate that type of data when we’re recording real-time brain activity across many electrodes.
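To make that concrete: one representative large-matrix operation in multi-electrode EEG processing is a channel-by-channel covariance matrix over a window of samples. The sketch below (illustrative only, not the lab’s actual pipeline, and with toy data) computes it in plain Python; in practice this reduces to a single dense GPU matrix multiply:

```python
def channel_covariance(data):
    """data: list of channels, each a list of samples.
    Returns the (unbiased) covariance matrix between channels."""
    n = len(data[0])
    means = [sum(ch) / n for ch in data]
    centered = [[x - m for x in ch] for ch, m in zip(data, means)]
    # Entry (i, j) is the covariance between channel i and channel j.
    return [[sum(a * b for a, b in zip(ci, cj)) / (n - 1) for cj in centered]
            for ci in centered]

# Two toy "electrode" channels, four samples each.
eeg = [[1.0, 2.0, 3.0, 4.0],
       [2.0, 4.0, 6.0, 8.0]]
cov = channel_covariance(eeg)
print(cov)
```

With hundreds of electrodes and thousands of samples per second, these matrices grow quickly, which is exactly where dense GPU linear algebra pays off.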

EEG experiment at the Gazzaley Lab at UCSF, Sandler Neurosciences Center.

NVIDIA: Describe the hardware/software platform currently in use by the development team.
Adam: We primarily use Python, MATLAB and C/C++. Our software is routinely executed on a range of platforms, including Linux (running Fedora 18), Windows 7, and Mac OS (Snow Leopard and Lion). Hardware we currently make use of includes NVIDIA Tesla K20s (for calculations), NVIDIA Quadro 5000s (for visualization) and two Intel quad-core CPUs.

We use Microsoft Visual Studio 2010 x64 with CUDA 5.0, with the TCC driver for the Tesla GPUs. The NVIDIA Nsight debugging tools are used with Visual Studio to optimize code performance and get a better idea of what is happening ‘under the hood’ of the GPUs in real time.


CUDA Spotlight: GPU-Accelerated FDTD Simulations for Applications in Photonics

This week’s Spotlight is on Pierre Wahl, a PhD student at Vrije Universiteit Brussel.

As a member of the Brussels Photonics Team (B-PHOT), he designs energy-efficient optical interconnects and works closely with the NVIDIA Application Lab at the Forschungszentrum Jülich.

Pierre used CUDA to develop B-CALM, a GPU-accelerated Finite Difference Time Domain (FDTD) simulator.

Below is an excerpt from our interview (you can read the complete Spotlight here):

NVIDIA: Pierre, what is B-CALM?
Pierre: B-CALM stands for Belgium-California Light Machine. It is an FDTD simulator that numerically solves electromagnetic problems using the fundamental Maxwell’s equations.

FDTD is particularly useful for problems where the electromagnetic waves interact with objects that are the same order of magnitude in size as the wavelength. Those problems can be very computationally intensive, especially when simulating the interaction of electromagnetic waves with metals, which is at the core of my research.
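The method itself is simple to sketch. In one dimension, Maxwell’s curl equations reduce to leapfrog updates of the electric and magnetic fields on a staggered grid; every grid point updates independently each step, which is why FDTD maps so naturally onto GPU threads. The following is a minimal illustrative 1D sketch in Python with arbitrary toy parameters, not B-CALM itself (which is a CUDA code handling full 3D problems and dispersive metals):

```python
import math

def fdtd_1d(steps=200, size=200, source_pos=100):
    """Minimal 1D FDTD: leapfrog E/H updates on a staggered (Yee) grid."""
    ez = [0.0] * size          # electric field samples
    hy = [0.0] * size          # magnetic field samples (staggered half-cell)
    for t in range(steps):
        for k in range(size - 1):          # update H from the curl of E
            hy[k] += 0.5 * (ez[k + 1] - ez[k])   # 0.5 = Courant number
        for k in range(1, size):           # update E from the curl of H
            ez[k] += 0.5 * (hy[k] - hy[k - 1])
        # Additive Gaussian pulse source at one grid point.
        ez[source_pos] += math.exp(-((t - 30) ** 2) / 100.0)
    return ez

fields = fdtd_1d()
print(max(abs(e) for e in fields))  # peak field amplitude after propagation
```

Each inner loop touches only nearest neighbors, so on a GPU every grid cell becomes one thread and the whole update is a stencil kernel over the field arrays.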

NVIDIA: How have GPUs helped you in your research?
Pierre: I research avenues to make on-chip optical interconnects very energy efficient, because safely extracting the heat generated by conventional metallic interconnects becomes increasingly difficult as bandwidth requirements grow.

However, for optical interconnects to be competitive, optoelectronic components (modulators/photodetectors) have to have a very low electrical capacitance and must therefore be made very small. By using metals to guide and confine light (also referred to as plasmonics) optoelectronic components can have a size that is only a fraction of the wavelength and hence a very small electrical capacitance.

Representation of a plasmonic integrated photodetector. The waveguide is only 90nm wide.
