Guide: Installing TensorFlow 1.8 with GPU Support against CUDA 9.1 and cuDNN 7.1 on Ubuntu 16.04

One interesting aspect of the deep learning ecosystem is the plentiful choice of frameworks. The flip side is that more options mean more confusion, especially when picking the most appropriate framework for the full range of problems at hand. In the end, instead of settling on a single framework, we may need to work with several, choosing one depending on the nature of the problem to solve.

TensorFlow is one of the most popular deep learning frameworks (arguably the most popular, judging by GitHub stars), and it comes with excellent documentation, including installation guides. The official installation page provides elaborate instructions for multiple OS platforms. So why this post?

The latest TensorFlow release with GPU support (version 1.8 at the time of writing) is built against CUDA 9.0. However, NVIDIA has already released CUDA 9.1, and newer versions are likely to follow. Because the prebuilt TensorFlow binaries lag behind the CUDA GA version, they will not work out of the box on a system that has only the latest CUDA version installed. The remedy is to build TensorFlow from source, which can be non-trivial, especially for those unfamiliar with the build process.
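Before proceeding, it is worth confirming which CUDA toolkit versions are actually present on the system. A quick check, assuming the default install locations under /usr/local (the version.txt file is present in standard CUDA 9.x installs):

$ ls -d /usr/local/cuda*
$ cat /usr/local/cuda/version.txt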

The final system setup after completing the installation steps explained in this post will be as follows.

Item                    Value
OS                      Ubuntu 16.04
NVIDIA driver version   390.48
CUDA version            9.1
cuDNN version           7.1.3
NCCL version            2.1.15
Python version          2.7.12
Python install method   virtualenv
TensorFlow version      1.8.0

Note that these components will receive version upgrades in the future. This post is expected to remain valid even after such upgrades. Should it become outdated, the content will be updated or a follow-up post will be written, ideally prompted by comments or feedback on the existing content.

Python 3 users may wonder whether these steps can be replicated for Python 3. The answer is yes, with minor adjustments: use a separate virtualenv dedicated to Python 3 and run the Python 3 equivalents of the commands, as sketched below.
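A minimal sketch of the Python 3 equivalents, assuming a separate virtualenv at ~/virtualenv/tensorflow-py3 (the directory name is only an example); the remaining build and install steps are the same, with pip resolving to the Python 3 environment:

$ sudo apt-get install python3-pip python3-dev
$ virtualenv --system-site-packages -p python3 ~/virtualenv/tensorflow-py3
$ source ~/virtualenv/tensorflow-py3/bin/activate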


Pre-Installation

Prior to installation, we will perform some system checks for the prerequisites.

Pre 0: Update apt and upgrade installed packages

$ sudo apt-get -y update && sudo apt-get -y upgrade

Pre 1: Check Python version
$ python -V

Output (expected): Python 2.7.12 (or other 2.7 version)

Pre 2: Check GCC version
$ gcc --version

Output (expected): 5.4.0 (or other 5.x version)

Pre 3: Check NVIDIA driver install status
$ nvidia-smi

Output (expected): GPU information

Troubleshooting:

bash: nvidia-smi: command not found

Meaning: Graphics driver has not been installed
Resolve: Install NVIDIA graphics driver. Refer to this article for the installation steps.

Pre 4: Check CUDA install status

$ nvcc --version

Output (expected): “… Cuda compilation tools, release 9.1 …”

Troubleshooting:

bash: nvcc: command not found

Meaning: CUDA toolkit has not been installed
Resolve: Install CUDA toolkit. Refer to this article for the installation steps.

Pre 5: Check cuDNN install status

$ locate libcudnn.so

Output (expected): /usr/lib/x86_64-linux-gnu/libcudnn.so

Troubleshooting:

Empty result returned

Meaning: cuDNN has not been installed.
Resolve: Install cuDNN. Refer to this article for the installation steps
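To additionally confirm the installed cuDNN version, the version macros can be read from the header file. The path below assumes the Debian package install (the header location may differ for a tar-file install):

$ grep -A 2 "define CUDNN_MAJOR" /usr/include/cudnn.h

Output (expected): the CUDNN_MAJOR, CUDNN_MINOR, and CUDNN_PATCHLEVEL defines, e.g. 7, 1, and 3 for cuDNN 7.1.3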

Pre 6: Check NCCL install status

$ locate libnccl.so

Output (expected): /usr/local/nccl-2.1/lib/libnccl.so

Troubleshooting:

Empty result returned

Meaning: NCCL has not been installed.
Resolve: Install NCCL. Refer to this article for the installation steps

Please note that even though NCCL is optional for a TensorFlow installation, we set it as a prerequisite in order to harness multi-GPU parallelism in future use.
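If you want to double-check the installed NCCL version, it can usually be read from the header file. The path below assumes the NCCL 2.1 install location used throughout this post:

$ grep -E "NCCL_(MAJOR|MINOR|PATCH)" /usr/local/nccl-2.1/include/nccl.h

Output (expected): the NCCL_MAJOR, NCCL_MINOR, and NCCL_PATCH defines, e.g. 2, 1, and 15 for NCCL 2.1.15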

Pre 7: Check CUDA profiling tools install status

$ dpkg --get-selections | grep cuda-command-line-tools

Output (expected): “cuda-command-line-tools-9-1 install”

Troubleshooting:

Empty result returned or status is not “install”

Meaning: CUDA profiling tools library has not been installed.
Resolve: Install the library

Steps to install CUDA profiling tools
7.1 Install CUDA profiling tools using apt

$ sudo apt-get install cuda-command-line-tools-9-1

7.2 Add the profiling tools library to LD_LIBRARY_PATH
$ vi ~/.profile

...
export LD_LIBRARY_PATH="/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda-9.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
...
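After editing ~/.profile, reload it so the change takes effect in the current shell (alternatively, log out and log back in):

$ source ~/.profile
$ echo $LD_LIBRARY_PATH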

Installation Steps

After confirming that all checks in the pre-installation phase have passed, we proceed with the TensorFlow installation. The virtualenv will be created at ~/virtualenv/tensorflow.

Step 1: Set locale to UTF-8

$ export LC_ALL="en_US.UTF-8"
$ export LC_CTYPE="en_US.UTF-8"
$ sudo dpkg-reconfigure locales

Step 2: Install pip and virtualenv for Python 2

$ sudo apt-get install python-pip python-dev python-virtualenv

Step 3: Create virtualenv environment for Python 2 (Virtualenv location: ~/virtualenv/tensorflow)

$ mkdir -p ~/virtualenv/tensorflow
$ virtualenv --system-site-packages ~/virtualenv/tensorflow

Step 4: Activate the virtualenv environment

$ source ~/virtualenv/tensorflow/bin/activate

Verify the prompt is changed to:

(tensorflow) $
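As an extra check that subsequent commands will use the virtualenv's interpreter instead of the system one, verify that python and pip resolve to the virtualenv's bin directory:

(tensorflow) $ which python
(tensorflow) $ which pip

Both should point to ~/virtualenv/tensorflow/bin.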

Step 5: (virtualenv) Ensure pip >= 8.1 is installed
(tensorflow) $ easy_install -U pip

Step 6: (virtualenv) Deactivate the virtualenv
(tensorflow) $ deactivate

Step 7: Install bazel to build TensorFlow from source

7.1 Install Java JDK 8 (Open JDK)
$ sudo apt-get install openjdk-8-jdk

7.2 Add bazel private repository into source repository list

$ echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
$ curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -

7.3 Install the latest version of bazel
$ sudo apt-get update && sudo apt-get install bazel

Step 8: Install TensorFlow Python dependencies
$ sudo apt-get install python-numpy python-dev python-pip python-wheel

Step 9: Build TensorFlow from source

9.1 Download the latest stable release of TensorFlow (release 1.8.0). We set the target directory to ~/installers/tensorflow

$ mkdir -p ~/installers/tensorflow && cd ~/installers/tensorflow
$ wget https://github.com/tensorflow/tensorflow/archive/v1.8.0.zip

9.2 Unzip the downloaded archive
$ unzip v1.8.0.zip

9.3 Go to the extracted TensorFlow source directory
$ cd tensorflow-1.8.0

9.4 Configure the build file
$ ./configure

Sample configuration (note that, apart from the CUDA-related settings, your configuration may vary):

Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages] 
Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: Y
Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: Y
Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: Y
Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: Y
Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: Y
Do you wish to build TensorFlow with XLA JIT support? [y/N]: N
Do you wish to build TensorFlow with GDR support? [y/N]: N
Do you wish to build TensorFlow with VERBS support? [y/N]: N
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
Do you wish to build TensorFlow with CUDA support? [y/N]: Y
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 9.0]: 9.1
Please specify the location where CUDA 9.1 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-9.1
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.1.3
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-9.1]:/usr/lib/x86_64-linux-gnu
Do you wish to build TensorFlow with TensorRT support? [y/N]: N
Please specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]:2.1.15
Please specify the location where NCCL 2 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-9.1]: /usr/local/nccl-2.1
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.7,3.7]3.7
Do you want to use clang as CUDA compiler? [y/N]: N
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: /usr/bin/gcc
Do you wish to build TensorFlow with MPI support? [y/N]: N
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: -march=native
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N

9.5 Build TensorFlow with GPU support
For GCC version 5 and newer (our case, since GCC 5.4 is installed):

$ bazel build --config=opt --config=cuda --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" //tensorflow/tools/pip_package:build_pip_package

For GCC version 4 and older
$ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
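The build can take a long time and is memory-hungry. If the machine runs out of RAM or becomes unresponsive, limiting bazel's parallelism can help; --jobs is a standard bazel flag, and the value 4 below is only an illustrative job count:

$ bazel build --config=opt --config=cuda --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" --jobs=4 //tensorflow/tools/pip_package:build_pip_package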

Troubleshooting:

ImportError: No module named enum

Meaning: The enum module only became part of the standard library in Python 3.4, so it is missing on Python 2.
Resolve: Manually install the enum34 backport

$ pip install --upgrade enum34

ImportError: No module named mock

Meaning: The mock module is not installed
Resolve: Install mock module

$ pip install mock

“no such package @nasm…All mirrors are down: ”

Meaning: Some or all of the nasm package mirrors are down. This is a known issue.
Resolve: Add a new mirror for nasm

$ vi tensorflow/workspace.bzl

...
  tf_http_archive(
      name = "nasm",
      urls = [
          "https://mirror.bazel.build/www.nasm.us/pub/nasm/releasebuilds/2.12.02/nasm-2.12.02.tar.bz2”,
          "http://www.nasm.us/pub/nasm/releasebuilds/2.12.02/nasm-2.12.02.tar.bz2",
          "http://pkgs.fedoraproject.org/repo/pkgs/nasm/nasm-2.12.02.tar.bz2/d15843c3fb7db39af80571ee27ec6fad/nasm-2.12.02.tar.bz2",
      ],
      sha256 = "00b0891c678c065446ca59bcee64719d0096d54d6886e6e472aeee2e170ae324",
      strip_prefix = "nasm-2.12.02",
      build_file = clean_dep("//third_party:nasm.BUILD"),
  )
...

9.6 Create the .whl file from the bazel build (We create a directory named tensorflow-pkg)

$ mkdir tensorflow-pkg
$ ./bazel-bin/tensorflow/tools/pip_package/build_pip_package tensorflow-pkg

9.7 Activate the virtualenv
$ source ~/virtualenv/tensorflow/bin/activate

9.8 (virtualenv) Install the .whl file
– Obtain the .whl file name
(tensorflow) $ cd tensorflow-pkg && ls -al

– Install the .whl file via pip (example)
(tensorflow) $ pip install tensorflow-1.8.0-cp27-cp27mu-linux_x86_64.whl

9.9 (virtualenv) Verify the installation
(tensorflow) $ python

import tensorflow as tf
hello = tf.string_join(['Hello', 'TensorFlow!'], ' ')
sess = tf.Session()
print(sess.run(hello))

Output after the last line:
Hello TensorFlow!

The installation is now complete and TensorFlow can be used on the system. Since TensorFlow was installed inside a virtualenv, remember to activate the virtualenv whenever you run a TensorFlow program.
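As an additional sanity check that the GPU is visible to TensorFlow, you can list the local devices from inside the virtualenv; the output should include a /device:GPU:0 entry (exact device names depend on your hardware):

(tensorflow) $ python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"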

Closing Remark

Making TensorFlow work with CUDA 9.1 should not be too daunting. If you have problems with your TensorFlow and CUDA 9.1 installation, or tips and tricks to share, feel free to write in the comment section.
