DIGITS: Deep Learning GPU Training System

The hottest area in machine learning today is Deep Learning, which uses Deep Neural Networks (DNNs) to teach computers to detect recognizable concepts in data. Researchers and industry practitioners are using DNNs in image and video classification, computer vision, speech recognition, natural language processing, and audio recognition, among other applications.

The success of DNNs has been greatly accelerated by using GPUs, which have become the platform of choice for training these large, complex DNNs, reducing training time from months to only a few days. The major deep learning software frameworks have incorporated GPU acceleration, including Caffe, Torch7, Theano, and CUDA-Convnet2. Because of the increasing importance of DNNs in both industry and academia and the key role of GPUs, last year NVIDIA introduced cuDNN, a library of primitives for deep neural networks.

Today at the GPU Technology Conference, NVIDIA CEO and co-founder Jen-Hsun Huang introduced DIGITS, the first interactive Deep Learning GPU Training System. DIGITS is a new system for developing, training and visualizing deep neural networks. It puts the power of deep learning into an intuitive browser-based interface, so that data scientists and researchers can quickly design the best DNN for their data using real-time network behavior visualization. DIGITS is open-source software, available on GitHub, so developers can extend or customize it or contribute to the project.

Figure 1: DIGITS console

Deep Learning is an approach to training and employing multi-layered artificial neural networks to assist in or complete a task without human intervention. DNNs for image classification typically use a combination of convolutional neural network (CNN) layers and fully connected layers made up of artificial neurons tiled so that they respond to overlapping regions of the visual field.

Figure 2: Generic network with two hidden layers

Figure 2 shows a generic representation of a network with two hidden layers and the interactions between layers. Feature processing occurs in each layer, represented as a series of neurons. Links between the layers communicate responses, akin to synapses. The general approach of processing data through multiple layers, performing feature abstraction at each layer, is analogous to how the brain processes information. The number of layers and their parameters can vary depending on the data and categories. Some deep neural networks comprise more than ten layers and more than a billion parameters [1][2].
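
To make this concrete, here is a minimal sketch in plain NumPy (not DIGITS or Caffe code) of a forward pass through a network with two hidden layers like the one in Figure 2. The layer sizes are arbitrary, chosen only for illustration:

    import numpy as np

    def relu(x):
        return np.maximum(0, x)  # common nonlinearity applied at each hidden layer

    def forward(x, layers):
        # Each hidden layer transforms its input and applies a nonlinearity,
        # abstracting features step by step as described above.
        for W, b in layers[:-1]:
            x = relu(W.dot(x) + b)
        W, b = layers[-1]
        scores = W.dot(x) + b              # output layer: one score per category
        e = np.exp(scores - scores.max())
        return e / e.sum()                 # softmax turns scores into probabilities

    # Toy network: 4 inputs -> hidden layers of 8 and 6 neurons -> 2 classes
    np.random.seed(0)
    layers = [(np.random.randn(8, 4), np.zeros(8)),
              (np.random.randn(6, 8), np.zeros(6)),
              (np.random.randn(2, 6), np.zeros(2))]
    print(forward(np.random.randn(4), layers))

Training consists of adjusting all of those weights so that the outputs match the labeled data, and that is the computation GPUs accelerate.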

Advantages of DIGITS

DIGITS provides a user-friendly interface for training and classification that can be used to train DNNs with a few clicks. It runs as a web application accessed through a web browser. Figure 1 shows the typical user workflow in DIGITS. The first screen is the main console window, from which you can create databases from images and prepare them for training. Once you have a database, you can configure your network model and begin training.

The DIGITS interface provides tools for DNN optimization. The main console lists existing databases and previously trained network models available on the machine, as well as the training activities in progress. You can track adjustments you have made to network configuration and maximize accuracy by varying parameters such as bias, neural activation functions, pooling windows, and layers.

DIGITS makes it easy to visualize networks and quickly compare their accuracies. When you select a model, DIGITS shows the status of the training exercise and its accuracy, and provides the option to load and classify images while the network is training or after training completes.

Because DIGITS runs a web server, it is easy for a team of users to share datasets and network configurations, and to test and share results. Within an organization several people may use the same data set for training networks with different configurations.

DIGITS integrates the popular Caffe deep learning framework from the Berkeley Vision and Learning Center, and supports GPU acceleration using cuDNN to massively reduce training time.

Installing DIGITS

Installing and using DIGITS is easy. Visit the DIGITS home page, register, and download the installer. Or, if you prefer, get the (Python-based) source code from GitHub.


Once everything is installed, launch DIGITS from its install directory using this command line:

python digits-devserver

Then, if DIGITS is installed on your local machine, load the DIGITS web interface in your web browser by entering the URL http://localhost:5000. If it is installed on a server you can replace localhost with the server IP address or hostname.

When you first open the DIGITS main console it will not have any databases. Creating a database is easy, as Figure 3 shows. Select “Images” under “New Dataset” in the left pane. You have two options for creating a database from images: either add the path to the “Training Image” text box and let DIGITS create the training and validation sets for you, or insert paths to both sets using the “Upload Text Files” tab. Once the database paths have been defined, click the “Create” button to generate the database.
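
If you use the “Upload Text Files” option, each line of a list file pairs an image path with a numeric class label, following the convention Caffe uses. Here is a minimal sketch for generating such a file; the directory layout and label mapping are hypothetical:

    import os

    labels = {"ship": 0, "no_ship": 1}  # hypothetical class folders

    with open("train.txt", "w") as f:
        for name, label in labels.items():
            folder = os.path.join("images", name)
            for fname in sorted(os.listdir(folder)):
                # one line per image: absolute path, a space, the label index
                path = os.path.abspath(os.path.join(folder, fname))
                f.write("%s %d\n" % (path, label))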

Figure 3: Creating a database in DIGITS

After creating a database you should define the network parameters for training. Go back to the main console and select any previously created dataset under “New Model”. In Figure 4 we have selected “Database1” from the two available datasets. Many of the features and functions available in the Caffe framework are exposed in the “Solver Options” pane on the left side. All network functions are available as well.

You have three options for defining a network: selecting a preconfigured (“standard”) network, a previous network, or a custom network, as shown in Figure 4 (middle). LeNet by Yann LeCun and AlexNet from Alex Krizhevsky are the two preconfigured networks currently available. You can also modify these networks by selecting the customize link next to the network. This lets you modify any of the network parameters, add layers, change the bias, or modify the pooling windows.
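
Under the hood these definitions are ordinary Caffe prototxt files, so you can also generate a custom network programmatically with Caffe’s Python NetSpec interface and paste the result into the custom network box. The sketch below builds a small LeNet-style network; the layer sizes and LMDB path are illustrative, not the exact DIGITS presets:

    import caffe
    from caffe import layers as L, params as P

    def small_net(lmdb_path, batch_size):
        n = caffe.NetSpec()
        # data layer reading from an LMDB like the ones DIGITS creates
        n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB,
                                 source=lmdb_path, ntop=2,
                                 transform_param=dict(scale=1.0 / 255))
        n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20,
                                weight_filler=dict(type='xavier'))
        n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
        n.ip1 = L.InnerProduct(n.pool1, num_output=500,
                               weight_filler=dict(type='xavier'))
        n.relu1 = L.ReLU(n.ip1, in_place=True)
        n.ip2 = L.InnerProduct(n.relu1, num_output=2,  # e.g. ship / no ship
                               weight_filler=dict(type='xavier'))
        n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
        return n.to_proto()

    # write a prototxt you can paste into the custom network box
    with open('custom_net.prototxt', 'w') as f:
        f.write(str(small_net('path/to/train_lmdb', 64)))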

Figure 4: Configuring a neural network in DIGITS

When you customize the network you can visualize it by selecting the “Visualize” button in the upper-right corner of the network editing box. This is a handy configuration-checking tool that shows your network layout and quickly reveals whether you have the wrong inputs into certain layers or forgot to add a pooling function.

Figure 5: DIGITS Training Results

After the configuration is complete you can start training! Figure 5 shows training results for a two-class image set using the example CaffeNet network configuration. The top of the training window has links to the network configuration files, information on the dataset used, and training status. If an error occurs during training it is posted in this area. You can download the network configuration files from this window to quickly check parameters during training.

During training DIGITS plots the accuracy and loss values in the top chart. This is handy because it provides real-time visualization into how well or poorly the network is learning. If the accuracy is not increasing or is not as expected you can abort training and/or delete it using the buttons in the upper-right corner. The learning rate as a function of the training epoch is plotted in the lower plot.
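
The shape of the learning rate curve is set by the solver’s learning rate policy, one of the Caffe options exposed in the Solver Options pane. For example, Caffe’s “step” policy drops the rate geometrically; here is a small sketch of that schedule, with example parameter values:

    def step_lr(base_lr, gamma, stepsize, iteration):
        # Caffe "step" policy: multiply the rate by gamma every stepsize iterations
        return base_lr * (gamma ** (iteration // stepsize))

    # with base_lr=0.01, gamma=0.1, stepsize=10000:
    # iterations 0-9999 train at 0.01, 10000-19999 at 0.001, and so on
    for it in (0, 9999, 10000, 20000):
        print(it, step_lr(0.01, 0.1, 10000, it))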

You can classify images with the network using the interface below the plots. Like Caffe, DIGITS takes snapshots during training; you can use the most recent (or any previous) snapshot to classify images. Select a snapshot with the “Select Model” drop-down menu and choose your desired epoch. Then simply input the URL of an online image or upload one from your local computer, and click “Test One Image”. You can also classify multiple images at once by uploading a text file listing URLs or image paths on the host machine.
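
The snapshots also work outside the browser. Here is a rough pycaffe sketch of classifying an image with files downloaded from a DIGITS model page; the file names and output blob name are illustrative, and matching the training preprocessing (mean subtraction, channel order, scaling) is the critical part:

    import caffe
    from caffe.proto import caffe_pb2

    # files downloaded from the DIGITS model page (names are illustrative)
    net = caffe.Net('deploy.prototxt', 'snapshot_iter_1000.caffemodel', caffe.TEST)

    # load the dataset mean that DIGITS stores with the database
    blob = caffe_pb2.BlobProto()
    with open('mean.binaryproto', 'rb') as f:
        blob.ParseFromString(f.read())
    mean = caffe.io.blobproto_to_array(blob)[0].mean(1).mean(1)  # per-channel mean

    # preprocess the same way training did
    t = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    t.set_transpose('data', (2, 0, 1))     # HxWxC -> CxHxW
    t.set_mean('data', mean)
    t.set_raw_scale('data', 255)           # caffe.io.load_image returns [0, 1]
    t.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR, as Caffe expects

    image = caffe.io.load_image('ship.jpg')
    net.blobs['data'].data[...] = t.preprocess('data', image)
    out = net.forward()
    print(out['softmax'][0])               # output blob name depends on the network

A mismatch in any of these preprocessing steps is the usual reason accuracy measured outside DIGITS does not match the training plots.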

How I use DIGITS

Let me demonstrate DIGITS with a test network I used to put it through its paces. I chose a relatively simple task: identifying images of ships. I started with two categories, “ship” and “no ship”. I obtained approximately 34,000 images from ImageNet via the URL lists provided on the site and by manually searching on USGS. ImageNet images are pre-tagged, which made it easy to categorize the images I found there. I manually tiled and tagged all of the USGS images. My ship category comprises a variety of different marine vehicles including cruise, cargo, weather, passenger and container ships; oil tankers, destroyers, and small boats. My non-ship category includes images of beaches, open water, buildings, sharks, whales, and other non-ship objects.

Figure 6: My DIGITS console

I was already a Caffe user, so DIGITS was immediately useful to me thanks to its user-friendly interface and the access it provides to Caffe features.

I have multiple datasets built from the same image data. One set comprises my original images and the others are modified versions. In the modified versions I mirrored all of the training images to inflate the dataset and try to account for variation in object orientations. I like that DIGITS displays all of my previously created datasets in the Main Console, and also in the network window, making it easy to select the one I want to use in each training exercise. It’s also easy to modify my networks, either by downloading a network file from a previous training run and pasting the modified version into the custom network box, or by loading one of my previous networks and customizing it.
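
Mirroring an image set like this is easy to script before importing it into DIGITS; here is a small sketch with Pillow, with hypothetical directory names:

    import os
    from PIL import Image

    src, dst = 'images/ship', 'images/ship_mirrored'
    os.makedirs(dst, exist_ok=True)
    for fname in os.listdir(src):
        img = Image.open(os.path.join(src, fname))
        # a horizontal flip doubles the data and varies object orientation
        img.transpose(Image.FLIP_LEFT_RIGHT).save(os.path.join(dst, 'm_' + fname))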

DIGITS is great for sharing data and results. I live in LA and work with a team in Texas and the Washington DC area. We run DIGITS on an internal server that everyone can access. This allows us to quickly check and track the iterations on our network configurations and see how changes affect network performance. Anyone with access can also configure their own network in DIGITS and perform CNN training on this host, and their activities display in the main console too. To demonstrate, Figure 6 shows my current console, with the three datasets I have stored as well as my completed and in-progress models.

DIGITS makes it easy for me to visualize my network when I classify an image. When classifying a single image it displays the activations at each layer as well as the kernels. I find it hard to mentally visualize a network’s response, but this feature helps by concisely showing all of the layer and activation information. Figure 7 shows an example of my test network correctly classifying an old photo of a military ship with 100% confidence, and Figure 8 shows the results of classifying a picture of me. It shows that my two-class ship/no-ship neural network is 100% sure that I am not a ship!

Figure 7: Correctly classifying an image of a ship
Figure 8: Classifying myself


[1] Krizhevsky, A., Sutskever, I. and Hinton, G. E., ImageNet Classification with Deep Convolutional Neural Networks. NIPS 2012: Neural Information Processing Systems, Lake Tahoe, Nevada.
[2] Szegedy, C. et al., Going Deeper with Convolutions. September 12, 2014. http://arxiv.org/pdf/1409.4842v1.pdf

  • JHG

    awesome work

  • Alten Li

    Brilliant! Good job!

  • Daniel Orf

    Any idea where we can find a recording of today’s DIGITS webinar?

  • Michael

    Does anyone know where the AMI for DIGITS is released?

  • Michael

    Also, I don’t see where in DIGITS the test accuracy for models is reported?

    • Allison Gray

      Hi Michael, the accuracy of the network is posted in real time while training. It will look like what is shown in Figure 5 above.

  • Is it possible to see the training progress graphs of an externally created network? I mean, it looks like DIGITS is only for image data networks, but I want to use Caffe for numerical data and I want to use DIGITS just to monitor the learning rate, etc.

    • Allison Gray

      Right, DIGITS is only for image data at this time. Currently you cannot use the visualization tools in DIGITS for other types of data.

      • Thank you Allison, I tried it (installed it) and discovered it’s not possible.
        I think it would be great to be able to use DIGITS for those kinds of use cases. I was able to start creating a custom network and see the visualization of the graph, but nothing else.

      • Stephane Zieba

        I have almost the same question. I am running Ubuntu in a virtual machine (i.e. low memory and no GPU support) and I would like to use DIGITS only for classification of images, by loading the network that I trained with Caffe under Windows. Do you know if it’s possible? Thanks in advance for your help.

        • Allison Gray

          DIGITS creates the same network files that Caffe does, and it is easy to load a pretrained network.

          The last time I tried to just paste in externally created network files, it didn’t work; I was missing some of the configuration files that DIGITS creates and uses to load pretrained networks. I did not look into recreating mock versions of the missing files, and outside of this brief attempt some time ago, I haven’t spent much time on this. Things may be different now. Please post this question on the DIGITS Users Google forum and someone will get back to you.

          • Stephane Zieba

            Thanks for the reply! I will keep trying DIGITS and will post on the forum. Regards, Stephane

          • Allison Gray

            Someone else asked a similar question on GitHub. You can find instructions for loading a pretrained network and performing classification on one or many images here: https://github.com/NVIDIA/DIGITS/issues/49#issuecomment-99533854

  • Vladimir Mitrovic

    Any news on an EC2 AMI with DIGITS pre-installed?

    • Allison Gray

      Unfortunately, it has not been posted yet.

      However, you could download the DIGITS installer (http://developer.nvidia.com/digits) onto an AWS instance to get started now. You will need a g2.2xlarge instance with Ubuntu 14.04.

      Once the DIGITS AMI is available I will post information on it.

      • blabla54

        Any update?

  • GS099

    So why isn’t there any real deep learning work being done for Windows? Almost all the work is pointing toward Linux. What is the reasoning behind this trend?

    • Allison Gray

      I have noticed a similar trend when I peruse deep learning sites as well. But Windows can be used for deep learning work. You can build and use Theano; I have it running on my Windows machine. You can use Caffe on Windows too. Here is a really helpful blog post about building it with Visual Studio 2013: https://initialneil.wordpress.com/2015/01/11/build-caffe-in-windows-with-visual-studio-2013-cuda-6-5-opencv-2-4-9/

      • GS099

        I have been working with Ubuntu 14.04… it’s not user-friendly and reminds me of DOS.

      • GS099

        Allison… it also seems like GPUs are used for processing, but no one talks about why, other than that they are faster than CPUs. Faster for what? Processing images? I’d only be interested in image processing for learning purposes, not DL projects.
        What kind of DL work are you doing on Windows?

        • If you need to know what GPUs are good at, I recommend first reading this page and following the links that interest you: https://developer.nvidia.com/about-cuda. You might also just click around on this blog and look at the wide variety of articles covering many different applications of GPUs.

          • GS099

            I’ve read everything I can get my hands on concerning NVIDIA, the DIGITS DevBox, GPUs, and deep learning, including blogs. There is NOT a single source of truth, as there are as many opinions on the subject as there are blogs on the topic. Thanks for the link, Mark… I am familiar with CUDA as well… all represent graphical representations of models. One does not need to work with pixel processing to do ML, and in fact this kind of study belongs to modeling and not ML, I feel.

  • Riley Lee

    Hi there. I’m trying to build a detector using a trained model. I downloaded the mean.binaryproto, deploy.prototxt, and .caffemodel files from the last epoch and fed them into pycaffe. However, I’m not able to get the same high validation accuracy from the net.forward_all function (99.6% from DIGITS, 89.6% from Python). Before classification, I subtract the mean image from the input image. Has anyone done this before? Is there any step I forgot? I really appreciate your help!
    BTW, I trained a model with pycaffe and repeated the same validation procedure; the accuracy of the pycaffe-trained model matched what was displayed in the training interface.

    • Allison Gray

      Did you try using the classification example under DIGITS_ROOT/examples/classification?

  • Deepak Nath

    Can you just use DIGITS for monitoring GPU usage?

  • Hi, we received our DIGITS DevBox on 07/13, but it doesn’t plot the accuracy and loss values in the top chart, nor the GPU usage and other stats. Would you please advise?

    • Allison Gray

      Where did you download DIGITS? DIGITS 1 required an internet connection to plot and did not display GPU usage. Are you using this version by chance?

      • We just received the hardware and we are using the version already installed on it. It must be DIGITS 1, I guess. The machine was not connected to the internet while we ran the first test, so, as you mentioned, this might be the problem.

        • Allison Gray

          Your DIGITS DevBox was likely delivered with DIGITS 1 installed. Let me know if the plotting does not work when you are connected to the internet. If you want to upgrade to DIGITS 2, you can download it here: https://developer.nvidia.com/digits. Let me know if you have any issues. If you want to keep DIGITS 1 running, you can run DIGITS 2 on a different port: ./digits-devserver -p 5001

          • It works after we connected it to the internet. Thanks for the help.
            We’re going to upgrade to DIGITS 2 later this week. Does DIGITS 1 include multi-GPU processing capability?

          • Allison Gray

            DIGITS 1 supports multiple GPUs: you can perform one training per GPU. With DIGITS 2, you can run a single training on more than one GPU.

  • Rakesh Gohel

    Hi, I would like to check out the images after the dataset is created. Could you tell me how to do that?

    • Allison Gray

      Sorry, I don’t have a script to share with you. It looks like someone asked a similar question on the Caffe-users Google group: https://groups.google.com/forum/#!searchin/caffe-users/extract$20an$20image$20lmdb/caffe-users/bPmbc0H7Tfc/wvLwNfJnLe8J. One of the responses includes an example script to help you get started. For more information on lmdb calls, I recommend looking at the lmdb documentation: http://lmdb.readthedocs.org/en/release/
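
      As a rough starting point, walking a DIGITS/Caffe image LMDB with the lmdb Python bindings might look like this (an untested sketch; adapt the database path, and note it assumes the images were stored unencoded):

          import lmdb
          import numpy as np
          from caffe.proto import caffe_pb2

          env = lmdb.open('path/to/train_db', readonly=True)
          with env.begin() as txn:
              for key, value in txn.cursor():
                  datum = caffe_pb2.Datum()  # each record is a serialized Datum
                  datum.ParseFromString(value)
                  img = np.frombuffer(datum.data, dtype=np.uint8).reshape(
                      datum.channels, datum.height, datum.width)
                  print(key, 'label:', datum.label, 'shape:', img.shape)
                  break  # remove this to walk the whole database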

      • Rakesh Gohel

        Thank you for the reply, Allison. Those links are helpful; I wonder if this can be achieved directly through the web interface. That would be an instant help.

  • Fine-tuning in DIGITS does not seem to be working.

    I renamed the layers that are different (let’s say in AlexNet you rename the last FC layer to ‘fc8-new’). I didn’t observe any difference in convergence rate or accuracy compared to when there is no fine-tuning. Would you please advise?

    • Allison Gray

      I have not had any issues with fine-tuning in DIGITS. When I fine-tune, I am showing the trained network new data; is this what you are doing? Also, the few times I have done this I found the accuracy of my fine-tuned network to be comparable to what it was before. The main difference is that it is now tuned for accurate classification with new categories. Can you tell me a little more about what you are trying to do?

  • Luiz Eduardo S. Oliveira

    That is a very nice project! It works great. The only problem I had was when I tried to classify many images: I uploaded a list of images, but the system crashed saying “Input must have 4 axes, corresponding to (num, channels, height, width)”. I’m trying to classify the same images I used for training.

    • Allison Gray

      It looks like someone else had a similar problem: https://github.com/NVIDIA/DIGITS/issues/234

      What version of DIGITS are you using?

  • Hi,

    I’m trying to train a network which requires about 24 GB of memory using all 4 GPUs. However, I was not able to get DIGITS working for that. It tries to use over 12 GB of memory on the first GPU at the beginning while the others are almost idle. This causes the training process to crash because the first GPU runs out of memory. How can I handle this issue?

  • Vijay Shah

    Hi Allison, this is Vijay Shah from High Tech Security Services, India. We have installed CCTV cameras on highways for surveillance purposes, and we want to classify vehicles (motorcycles, cars, buses, trucks, etc.) in crowded live traffic video.
    Kindly let us know how we can achieve this using NVIDIA deep learning vision.

    • Sasikumar R

      Hi Vijay, saw this message only now. Do you still require assistance on the same? If yes, please let me know.

  • Surya Shah

    I want to use a Boltzmann machine for image classification. How can it be done?

  • Hi Sasha, nice work!
    Caffe is limited in parallelism when run from the GitHub repo: https://github.com/BVLC/caffe/blob/master/docs/multigpu.md
    How is that with DIGITS?

  • lavanya seetharaman

    Hi, is there any way I can use these results on smartphones?

    • Ingrid Fitzgerald

      Hi lavanya,

      Did you find a way to run the results on smartphones? If you did, please share.

      • lavanya seetharaman

        As of now we don’t have smartphones powerful enough to really support this, but in the near future we will. Actually, I set up the server (a workstation) and made my smartphone the client interface.

      • Dago

        Hi, maybe create a web service that sends the picture to the computer and then returns an answer to the phone.

  • Surya Pandian

    Is it possible to edit the model after it has been generated, by providing a different dataset?