Tag Archive for 'GPU'

How to Properly Install NVIDIA Graphics Driver on Ubuntu 16.04

In recent posts, we have been going through the installation of deep learning frameworks like Caffe2 and their dependencies, such as CUDA and cuDNN. In this post, we will go a few steps back to the very basic prerequisite of setting up a GPU-powered deep learning system: display driver installation. We will focus specifically on NVIDIA display driver installation, given the pervasiveness and robustness of NVIDIA GPUs as deep learning infrastructure.

Key Terminology

Before proceeding to the installation, let’s discuss some key terminology related to the use of NVIDIA GPUs as the computing infrastructure of a deep learning system.

GPU: Graphical / Graphics Processing Unit. A unit of computation, in the form of a small chip on the graphics card, traditionally intended to perform rapid computation for image/graphics rendering and display purposes. A graphics card can contain one or more GPUs, while a single GPU can be built of hundreds or thousands of cores.

CUDA: A parallel programming model, and its implementation as a computing platform developed by NVIDIA, for performing computation on GPUs. CUDA was designed to speed up computation by harnessing the power of parallel computation across the hundreds or thousands of GPU cores.
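To make the model concrete, below is a minimal sketch (not taken from the post) of a CUDA C program that adds two vectors, with each GPU thread handling one element in parallel:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: every thread computes one element of the result in parallel.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes); cudaMalloc((void **)&db, bytes); cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    vecAdd<<<(n + threads - 1) / threads, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check one element (expect 3.0).
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The same computation on a CPU would loop over the million elements one by one; on the GPU, the loop disappears and the work is spread over thousands of lightweight threads.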

CUDA-enabled GPUs: NVIDIA GPUs that support the CUDA programming model and its implementation.

CUDA compute capability: A number that refers to the general specifications and available features, especially in terms of parallel computing methods, of a CUDA-enabled GPU. The full list of the features available at each compute capability can be seen here.

Note on CUDA compute capability and deep learning:
It is important to note that if you plan to use an NVIDIA GPU for deep learning purposes, you need to make sure that the compute capability of the GPU is at least 3.0 (Kepler architecture). Continue reading
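One way to check this (a minimal sketch, assuming the CUDA toolkit is already installed) is to query the device properties through the CUDA runtime API and inspect the major and minor compute capability numbers:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        // prop.major and prop.minor encode the compute capability, e.g. 6.1 for Pascal cards.
        printf("Device %d: %s, compute capability %d.%d\n",
               dev, prop.name, prop.major, prop.minor);

        if (prop.major < 3)
            printf("  -> below 3.0 (pre-Kepler); most deep learning frameworks will not run on it\n");
    }
    return 0;
}

Compiling this with nvcc and running it lists every CUDA-enabled GPU in the machine together with its compute capability.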

List of NVIDIA Desktop Graphics Card Models for Building Deep Learning AI System

If you are doing deep learning AI research and/or development with GPUs, chances are you will be using a graphics card from NVIDIA to perform the deep learning tasks. One practical aspect of GPU computing is that the graphics card occupies a PCI / PCIe slot. From a frugality point of view, it may seem like a brilliant idea to scavenge unused graphics cards from the fading PC world and line them up on an otherwise unused desktop motherboard to create a somewhat powerful compute node for AI tasks. Maybe not.

With the increasing popularity of container-based deployment, a system architect may consider creating several containers, each running a different AI task. This means that the underlying GPU resources should be shared among the containers. NVIDIA provides a utility called NVIDIA Docker, or nvidia-docker2, that enables containers to access the GPUs of the host machine. As the name suggests, the utility targets the Docker container type. Continue reading

Guide: Installing CUDA Toolkit 9.1 on Ubuntu 16.04

With advances in GPU technology, performing complex computation is no longer the exclusive domain of multicore CPUs. It is not uncommon to perform computation for linear algebra, image and video processing, machine learning (especially deep learning), graph analytics, and so forth on the GPU.

NVIDIA graphics cards have gained popularity among machine learning researchers and practitioners as the base hardware for GPU computing. To harness the GPU’s power, NVIDIA develops and provides the CUDA Toolkit, which serves as the development environment and libraries for GPU-accelerated applications.

If you are using Ubuntu 16.04 (Xenial) and want to install the recent release of the CUDA Toolkit (version 9.1), this post may help. The official installation guide is available on the NVIDIA website and can be used as a reference when following the steps outlined in this post. Continue reading
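As a quick sanity check once the installation is done (a minimal sketch, not part of the official guide), a small program compiled with nvcc can report the driver and runtime versions seen by the CUDA runtime API:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;

    // Both values are encoded as 1000 * major + 10 * minor, e.g. 9010 for CUDA 9.1.
    cudaDriverGetVersion(&driverVersion);
    cudaRuntimeGetVersion(&runtimeVersion);

    printf("CUDA driver version:  %d.%d\n", driverVersion / 1000, (driverVersion % 100) / 10);
    printf("CUDA runtime version: %d.%d\n", runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    return 0;
}

If the runtime reports 9.1 but the driver reports an older version, the display driver is too old for this toolkit release and should be upgraded first.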