Monthly Archives: May 2018

How to Install Jupyter Notebook as a Service for TensorFlow and Deep Learning on Ubuntu 16.04

When developing a deep-learning system, especially during the modeling stage, a lot of trial and error can be involved in evolving the codebase. An easy remedy to reduce errors is to use a robust IDE that provides productivity-boosting features such as code completion, method-definition lookup, code-style suggestions, advanced debugging, a user-friendly UI, and so forth.

Another element of a better development experience is interactivity. Instead of writing the whole source code and evaluating everything at once, it is arguably more productive to write the code in small steps, with one line as the smallest unit, and evaluate the code up to the last line written. This kind of mechanism is possible for scripting languages, where the code is interpreted. Node.js, for example, has a feature named the read-eval-print loop (REPL) that enables someone to write JavaScript code and have it evaluated on the go. Deep learning algorithms and systems are often developed in Python, another interpreted language. Similar initiatives also exist for Python, in the form of the interactive shell and the “notebook”. A notebook is an interactive environment, normally with web GUI support, where someone can combine code execution, text, rich media, charting, and other types of data visualization. A popular notebook for Python is Jupyter Notebook, formerly known as IPython Notebook.
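As a quick illustration, here is a short session in Python’s standard interactive shell: each line is evaluated as soon as it is entered, and the result is printed immediately.

    $ python
    >>> words = ['read', 'eval', 'print']
    >>> [w.upper() for w in words]
    ['READ', 'EVAL', 'PRINT']
    >>> ' - '.join(words)
    'read - eval - print'

A notebook offers the same stepwise evaluation, with the addition of rich output such as charts and formatted text alongside the code.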

In this article, we will go into more detail about Jupyter Notebook installation and configuration on Ubuntu 16.04. However, it’s important to note that the configuration depends on some prerequisites. This article is the continuation of the previous article about TensorFlow installation. Please make sure you have read that article to understand the prerequisites; otherwise, some steps explained in this article may not work. Continue reading
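For orientation, below is a minimal sketch of the kind of systemd unit this setup works toward. The user name, virtualenv path, and notebook options here are assumptions; adapt them to your own environment. Create /etc/systemd/system/jupyter.service with contents along these lines:

    [Unit]
    Description=Jupyter Notebook
    After=network.target

    [Service]
    Type=simple
    User=ubuntu
    ExecStart=/home/ubuntu/tensorflow/bin/jupyter notebook --no-browser
    WorkingDirectory=/home/ubuntu
    Restart=always

    [Install]
    WantedBy=multi-user.target

Then register and start it:

    sudo systemctl daemon-reload
    sudo systemctl enable jupyter
    sudo systemctl start jupyter

Running the notebook as a service means it starts on boot and is restarted automatically if it crashes, instead of living inside a terminal session.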

Guide: Installing TensorFlow 1.8 with GPU Support against CUDA 9.1 and cuDNN 7.1 on Ubuntu 16.04

What is interesting in the deep learning ecosystem is the plentiful choice of deep learning frameworks. Of course, there is another side to the equation: more options equate to more confusion, especially in choosing the most appropriate framework for the entire gamut of problems. At the end of the day, instead of using just one, we may need to stick with multiple deep learning frameworks, with each usage depending on the nature of the problem to solve.

TensorFlow is one of the most popular (de facto the most popular, in terms of GitHub stars) deep learning frameworks. TensorFlow comes with excellent documentation, which includes the documentation for installation. If you go to the official documentation page for installation, you will find an elaborate installation guide for multiple OS platforms. Then why this post?

The latest version of TensorFlow with GPU support (version 1.8 at the time this post is published) is built against CUDA 9.0. However, NVIDIA has released CUDA 9.1, and a newer version will likely follow in the near future. Given that TensorFlow lags behind the CUDA GA version, the publicly released TensorFlow bundle cannot immediately work on a system that has only the latest CUDA version installed. A remedy for this is installing from source, which can be non-trivial, especially for those who are not so familiar with the source-build mechanism.
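As a rough outline, the source build follows the flow below. This is a sketch of the official build procedure of that era (Bazel is required); the ./configure step interactively asks for the CUDA version (9.1), the cuDNN version (7.1), and your GPU’s compute capability.

    git clone https://github.com/tensorflow/tensorflow
    cd tensorflow
    git checkout v1.8.0
    ./configure    # answer the prompts: CUDA 9.1, cuDNN 7.1, compute capability, etc.
    bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    pip install /tmp/tensorflow_pkg/tensorflow-*.whl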

The final system setup after completing the installation steps explained in this post will be as follows.

Item                      Value
OS                        Ubuntu 16.04
NVIDIA driver version     390.48
CUDA version              9.1
cuDNN version             7.1.3
NCCL version              2.1.15
Python version            2.7.12
Python install method     virtualenv
TensorFlow version        1.8.0
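Once the installation is complete, a quick sanity check along these lines should confirm the versions above (the CUDA path assumes a default install; run the Python check inside the virtualenv):

    nvidia-smi                             # driver version
    cat /usr/local/cuda/version.txt        # CUDA toolkit version
    python -c "import tensorflow as tf; print(tf.__version__)"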

Note that these components will be updated in the future, which implies version upgrades. It is expected that this post will remain valid even after such upgrades. Should the post become invalid, the content will be updated or another post will be written; this, however, relies on sufficient comments or feedback regarding the existing content. Continue reading

How to Install NVIDIA Collective Communications Library (NCCL) 2 for TensorFlow on Ubuntu 16.04

NVIDIA Collective Communications Library (NCCL) is a library developed to provide parallel-computation primitives in multi-GPU and multi-node environments. The idea is to enable GPUs to work collectively to complete a certain computing task. This is especially helpful when the computation is complex: with multiple GPUs working together, the task is completed in less time, rendering a better-performing system. People with a background or experience in distributed systems, such as Hadoop, may immediately relate this concept to the similar model applied in traditional distributed systems. Hadoop, for example, supports the MapReduce programming model, which splits a compute job into chunks that are spread across the slave nodes and collected back by the master to produce the final output. Continue reading
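To give a taste of what the installation involves: NCCL 2 of this era is downloaded as an archive from the NVIDIA developer site (an account is required) and unpacked somewhere the dynamic linker can find it. The archive name and install location below are illustrative only.

    # Illustrative file name; use the archive you actually downloaded
    tar -xvf nccl_2.1.15-1+cuda9.1_x86_64.txz
    sudo mkdir -p /usr/local/nccl
    sudo cp -R nccl_2.1.15-1+cuda9.1_x86_64/* /usr/local/nccl/
    echo 'export LD_LIBRARY_PATH=/usr/local/nccl/lib:$LD_LIBRARY_PATH' >> ~/.bashrc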

How to Properly Install NVIDIA Graphics Driver on Ubuntu 16.04

In the recent posts, we have been going through the installation of deep learning frameworks like Caffe2 and their dependencies, such as CUDA and cuDNN. In this post, we will go a few steps back to the very basic prerequisite of setting up a GPU-powered deep learning system: display driver installation. We will specifically focus on NVIDIA display driver installation due to the pervasiveness and robustness of NVIDIA GPUs as deep learning infrastructure.
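As a preview of where the post is heading, one common approach on Ubuntu 16.04 is to install the driver from the graphics-drivers PPA. Treat the commands below as a sketch, since the full post walks through the procedure in detail.

    sudo add-apt-repository ppa:graphics-drivers/ppa
    sudo apt-get update
    sudo apt-get install nvidia-390    # the 390.xx driver series used in this guide
    sudo reboot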

Key Terminologies

Before proceeding to the installation, let’s discuss some key terminologies related to the use of NVIDIA GPUs as the computing infrastructure in a deep learning system.

GPU: Graphics Processing Unit. A unit of computation, in the form of a small chip on a graphics card, traditionally intended to perform rapid computation for image/graphics rendering and display purposes. A graphics card can contain one or more GPUs, while one GPU can be built of hundreds or thousands of cores.

CUDA: A parallel programming model, and its implementation as a computing platform, developed by NVIDIA to perform computation on GPUs. CUDA was designed to speed up computation by harnessing the power of parallel computation across hundreds or thousands of GPU cores.

CUDA-enabled GPUs: NVIDIA GPUs that support the CUDA programming model and its implementation.

CUDA compute capability: A number that refers to the general specifications and available features, especially in terms of parallel computing methods, of a CUDA-enabled GPU. The full list of features available in each compute capability can be seen here.

Note on CUDA compute capability and deep learning:
It is important to note that if you plan to use an NVIDIA GPU for deep learning purposes, you need to make sure that the compute capability of the GPU is at least 3.0 (Kepler architecture). Continue reading
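One way to check the compute capability of an installed GPU is the deviceQuery sample that ships with the CUDA toolkit, assuming the toolkit lives under the default /usr/local/cuda path:

    cd /usr/local/cuda/samples/1_Utilities/deviceQuery
    sudo make
    ./deviceQuery | grep "CUDA Capability"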