Tag Archives: CUDA

CUDA Compatibility of NVIDIA Display / GPU Drivers

Last update: December 2nd, 2023

The last CUDA installation guide on this blog covered CUDA 9.2. We went through several CUDA installation methods, including multiple-version CUDA installs. While that guide is still valid for CUDA 9.2, NVIDIA keeps releasing newer versions of CUDA. As a concrete example, when this article was first written in December 2018, the latest version was CUDA 10, which had taken the spotlight from CUDA 9.2. If we want to upgrade to CUDA 10, how can we do so? Can we simply upgrade the CUDA toolkit without upgrading the display driver?

Handling CUDA Version Upgrade

“CUDA version upgrade” itself can be a misleading term because, since CUDA 8.0, multiple versions of CUDA can be installed on the same machine. But let’s take a simple scenario: we already have CUDA 9.1 installed and only want to upgrade to CUDA 10. NVIDIA states that each version of the CUDA toolkit requires a certain minimum NVIDIA display driver version. This means that when upgrading to a newer version of the CUDA toolkit, we need to make sure that the currently installed display driver is at least as new as the minimum compatible driver version. In other words, a standard CUDA upgrade involves two upgrade processes: a CUDA (toolkit) upgrade and a driver upgrade. The following picture visualizes the standard upgrade process from CUDA 9.1 to CUDA 10: the toolkit is upgraded from 9.1 to 10 and the driver is upgraded from 390 to 410.
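Before upgrading, it helps to know what the current system reports. The following is a minimal sketch, not part of the original guide, that asks the CUDA runtime for the two versions that matter; it assumes libcudart.so is resolvable by the dynamic loader (e.g. /usr/local/cuda/lib64 is on LD_LIBRARY_PATH):

```python
import ctypes

# Load the CUDA runtime library (assumes the loader can find it).
cudart = ctypes.CDLL("libcudart.so")

driver_ver = ctypes.c_int()
runtime_ver = ctypes.c_int()
cudart.cudaDriverGetVersion(ctypes.byref(driver_ver))    # max CUDA version the driver supports
cudart.cudaRuntimeGetVersion(ctypes.byref(runtime_ver))  # version of the installed runtime

# Both values are encoded as 1000 * major + 10 * minor, e.g. 10000 -> CUDA 10.0.
fmt = lambda v: "%d.%d" % (v // 1000, (v % 100) // 10)
print("Driver supports up to CUDA:", fmt(driver_ver.value))
print("Installed CUDA runtime:    ", fmt(runtime_ver.value))

if driver_ver.value < runtime_ver.value:
    print("Display driver is older than the toolkit requires: upgrade the driver first.")
```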

How to Build and Install The Latest TensorFlow without CUDA GPU and with Optimized CPU Performance on Ubuntu

In this post, we are about to accomplish something less common: building and installing TensorFlow with CPU-only support on an Ubuntu server / desktop / laptop. We are targeting machines with older CPUs, for example those without Advanced Vector Extensions (AVX) support. This kind of setup can be a choice when we are not using TensorFlow to train a new AI model but only to obtain predictions (inference) served by an already trained model. Compared with model training, model inference is less computationally intensive. Hence, instead of performing the computation with GPU acceleration, the task can simply be handled by the CPU.

tl;dr The WHL file from the TensorFlow CPU build is available for download from this GitHub repository.

Since we will build TensorFlow with CPU support only, the physical server does not need additional graphics card(s) mounted in the PCI slot(s). This is different from the case where we build TensorFlow with GPU support, which requires at least one discrete (non built-in) graphics card that supports CUDA. Naturally, running TensorFlow on the CPU is an economical approach to deep learning. Then how about the performance? Benchmark results have shown that GPUs outperform CPUs on deep learning tasks, especially model training. However, this does not mean that TensorFlow on the CPU is not a feasible option. With proper CPU optimization, TensorFlow can exhibit improved performance that narrows the gap with its GPU counterpart. When cost is the more serious issue, say we can only do model training and inference in the cloud, leaning towards TensorFlow CPU can also make more sense from a financial standpoint.
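To make the inference-only use case concrete, here is a minimal TF 1.x sketch of running a prediction from a frozen graph on the CPU. The file name frozen_model.pb and the tensor names input:0 / output:0 are hypothetical placeholders for an actual trained model:

```python
import tensorflow as tf

# Load a frozen (trained, serialized) graph. "frozen_model.pb" is a placeholder.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# On a CPU-only build, every op is simply placed on the CPU; no GPU required.
with tf.Session(graph=graph) as sess:
    inputs = graph.get_tensor_by_name("input:0")    # placeholder tensor name
    outputs = graph.get_tensor_by_name("output:0")  # placeholder tensor name
    print(sess.run(outputs, feed_dict={inputs: [[1.0, 2.0, 3.0]]}))
```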

Installing CUDA Toolkit 9.2 on Ubuntu 16.04: Fresh Install, Install by Removing Older Version, Install and Retain Old Version

In the previous post, we proceeded with the CUDA 9.1 installation on Ubuntu 16.04 LTS. As with other software that evolves, NVIDIA released CUDA 9.2 back in May. It is also safe to assume that CUDA 9.2 will not be the final version. A newer version will come sooner or later, and here we are left with the nagging question: “How can we upgrade safely without clobbering the currently working system?” Moreover, we may also wonder if there is a mechanism to roll back the change and live with the current setup while recognizing that it’s not yet the time to upgrade.

This post will cover three scenarios of CUDA 9.2 installation: 1) fresh installation, 2) install to upgrade by removing the old version, and 3) install to upgrade while keeping multiple versions.
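As a preview of the third scenario: by convention, each toolkit is installed under its own versioned directory, and the /usr/local/cuda symlink selects the active one. A small sketch to inspect this layout:

```python
import glob
import os

# Side-by-side CUDA installs live under /usr/local/cuda-<version>;
# the /usr/local/cuda symlink points at whichever toolkit is active.
for path in sorted(glob.glob("/usr/local/cuda-*")):
    print("installed:", path)
print("active:", os.path.realpath("/usr/local/cuda"))
```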

Guide: Installing TensorFlow 1.8 with GPU Support against CUDA 9.1 and cuDNN 7.1 on Ubuntu 16.04

What is interesting in the deep learning ecosystem is the plentiful choice of deep learning frameworks. Of course, there is a flip side: more options mean more confusion, especially when choosing the most appropriate framework for the problem at hand. At the end of the day, instead of using just one, we may need to stick with multiple deep learning frameworks, choosing among them depending on the nature of the problem to solve.

TensorFlow is one of the most popular (de facto the most popular, in terms of GitHub stars) deep learning frameworks. TensorFlow comes with excellent documentation, which also includes installation documentation. If you go to the official documentation page for installation, you will find an elaborate installation guide for multiple OS platforms. Then why this post?

The latest version of TensorFlow with GPU support (version 1.8 at the time this post is published) is built against CUDA 9.0. However, NVIDIA has already released CUDA 9.1, and a newer version may well arrive in the near future. Because TensorFlow lags behind the latest CUDA GA version, the publicly released TensorFlow packages cannot immediately work on a system that has only the latest CUDA version installed. The remedy is to build from source, which can be non-trivial, especially for those not yet familiar with the source build mechanism.

The final system setup after completing the installation steps explained in this post will be as follows.

Item                     Value
OS                       Ubuntu 16.04
NVIDIA driver version    390.48
CUDA version             9.1
cuDNN version            7.1.3
NCCL version             2.1.15
Python version           2.7.12
Python install method    virtualenv
TensorFlow version       1.8.0
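Once everything is installed, a quick sanity check from inside the virtualenv can confirm that the build sees the GPU (device_lib is TensorFlow’s internal device-listing helper):

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)                # expect 1.8.0
print(tf.test.is_built_with_cuda())  # expect True for a GPU build

# A working CUDA 9.1 / cuDNN 7.1 setup should list a /device:GPU:0 entry
# alongside the CPU device.
print([d.name for d in device_lib.list_local_devices()])
```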

Note that these components will keep being updated, which implies version upgrades down the road. This post is expected to remain valid even after such upgrades. Should it become invalid, the content will be updated or another post will be written; comments and feedback on the existing content will help make that happen.

How to Properly Install NVIDIA Graphics Driver on Ubuntu 16.04

In recent posts, we have gone through the installation of deep learning frameworks like Caffe2 and their dependencies, such as CUDA and cuDNN. In this post, we will go a few steps back to the very basic prerequisite of setting up a GPU-powered deep learning system: display driver installation. We will specifically focus on NVIDIA display driver installation due to the pervasiveness and robustness of NVIDIA GPUs as deep learning infrastructure.

Key Terminologies

Before proceeding to the installation, let’s discuss some key terminologies related to the use of NVIDIA GPUs as the computing infrastructure in a deep learning system.

GPU: Graphical / Graphics Processing Unit. A unit of computation, in the form of a small chip on the graphics card, traditionally intended to perform rapid computation for image / graphics rendering and display purposes. A graphics card can contain one or more GPUs, while one GPU can be built of hundreds or thousands of cores.

CUDA: A parallel programming model, and its implementation as a computing platform, developed by NVIDIA to perform computation on GPUs. CUDA was designed to speed up computation by harnessing the parallelism of hundreds or thousands of GPU cores.

CUDA-enabled GPUs: NVIDIA GPUs that support the CUDA programming model and its implementation.

CUDA compute capability: A number that identifies the general specifications and available features, especially the parallel computing capabilities, of a CUDA-enabled GPU. The full list of the features available at each compute capability can be seen here.

Note on CUDA compute capability and deep learning:
It is important to note that if you plan to use an NVIDIA GPU for deep learning purposes, you need to make sure that the compute capability of the GPU is at least 3.0 (Kepler architecture).
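For a quick programmatic check, the compute capability can be queried through the CUDA runtime. This is a sketch rather than part of the original post; the attribute IDs 75 and 76 correspond to cudaDevAttrComputeCapabilityMajor / cudaDevAttrComputeCapabilityMinor in the runtime API, and libcudart.so is assumed to be resolvable by the loader:

```python
import ctypes

cudart = ctypes.CDLL("libcudart.so")
major, minor = ctypes.c_int(), ctypes.c_int()
# cudaDeviceGetAttribute(value, attribute, device); device 0 is queried here.
cudart.cudaDeviceGetAttribute(ctypes.byref(major), 75, 0)
cudart.cudaDeviceGetAttribute(ctypes.byref(minor), 76, 0)
print("Compute capability: %d.%d" % (major.value, minor.value))
if (major.value, minor.value) < (3, 0):
    print("Below the 3.0 (Kepler) minimum needed for deep learning use.")
```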