Friday, August 14, 2015

Hardware for Deep Learning

Figure 3 shows how using multiple GPUs can reduce training time. The graph plots the training speedup for GoogLeNet on 1, 2, and 4 GPUs with a batch size of 128. These results were obtained with a DIGITS DevBox using GeForce TITAN X GPUs and the Caffe framework.

Figure 3: Training speedup achieved with DIGITS on multiple GeForce TITAN X GPUs in a DIGITS DevBox. These results were obtained with the Caffe framework and a batch size of 128.
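Outside of the DIGITS interface, the same kind of multi-GPU data-parallel training can be launched directly from Caffe's command line. A minimal sketch (the solver file name is an assumption; it must point at your network and training settings):

```shell
# Hypothetical example: train a network across 4 GPUs with Caffe.
# Assumes the caffe binary is on PATH and solver.prototxt exists
# and references your train/val network definition.
caffe train -solver solver.prototxt -gpu 0,1,2,3

# Caffe also accepts "-gpu all" to use every visible device.
```

With the comma-separated list, Caffe replicates the network on each listed device and averages gradients across them each iteration, which is why the effective batch size scales with the GPU count.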

Specialized system
The DIGITS™ DevBox Access Program is open to qualified deep learning researchers in the United States; the system is priced at $15,000, with a lead time of 8-10 weeks from payment confirmation.

Amazon EC2 - GPU G2:

GPU Cloud Computing
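For those without access to dedicated hardware, a G2 instance can be launched from the AWS command line. A minimal sketch, assuming the AWS CLI is configured and using placeholder values for the AMI ID and key pair name (both are assumptions you must replace):

```shell
# Hypothetical example: launch a GPU-backed EC2 G2 instance.
# ami-XXXXXXXX and my-keypair are placeholders, not real values.
aws ec2 run-instances \
    --image-id ami-XXXXXXXX \
    --instance-type g2.2xlarge \
    --count 1 \
    --key-name my-keypair
```

The g2.2xlarge type provides a single NVIDIA GRID GPU; workloads that benefit from multiple GPUs, like the multi-GPU Caffe training above, need a larger GPU instance type or dedicated hardware such as the DevBox.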
