Tag Archives: CPU

How to Resolve the Error “Illegal instruction (core dumped)” When Running “import tensorflow” in a Python Program

There are several ways to install TensorFlow on Ubuntu. The easiest is to install it via pip. Unfortunately, this easy installation may result in a bumpy first-time experience of running TensorFlow.
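For reference, the typical pip-based installation (assuming pip is already set up for the Python interpreter in use) is a single command:

$ pip install tensorflow

Consider the following one-line Python script: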

$ python -c 'import tensorflow as tf;'

This should be where the excitement begins, the moment when conviction about the new era of AI-powered banalities starts to bloom. Yet the reality can be unexpectedly different: executing the command may immediately fail with this infamous error:

Illegal instruction (core dumped)

This means that TensorFlow has crashed even before it does anything. What a surprise!

The good thing is that we can run gdb to debug Python and start analyzing the call stack; a minimal way to do so is sketched below. But what is even better is that we can save the brilliance for later. This error has been reported repeatedly and has conveniently sat on its fame for a while, as reflected on the issue page.
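For reference, one way to capture the crashing call stack is to run the Python one-liner under gdb and print the backtrace after the crash (a sketch, assuming gdb is installed; readable frames also require the relevant debug symbols):

$ gdb --args python -c 'import tensorflow as tf'
(gdb) run
(gdb) bt

The frames at the top of the backtrace usually point at the shared library that executed the offending instruction.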

How to Build and Install the Latest TensorFlow Without CUDA GPU and With Optimized CPU Performance on Ubuntu

In this post, we are about to accomplish something less common: building and installing TensorFlow with CPU-only support on an Ubuntu server, desktop, or laptop. We are targeting machines with older CPUs, for example those without Advanced Vector Extensions (AVX) support. This kind of setup can be a reasonable choice when we are not using TensorFlow to train a new AI model but only to obtain predictions (inference) from an already trained model. Compared with model training, inference is less computationally intensive, so instead of relying on GPU acceleration, the task can be handled by the CPU alone.
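As a quick check, the CPU flags reported by the kernel reveal whether AVX is available on a given machine; an empty output suggests the CPU lacks AVX (a minimal check, assuming the standard Linux /proc/cpuinfo layout):

$ grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u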

tl;dr The WHL file from the TensorFlow CPU-only build is available for download from this GitHub repository.
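Once downloaded, the wheel can be installed with pip as usual; the file name below is only a placeholder, as the actual name depends on the TensorFlow version and the Python version the wheel was built for:

$ pip install /path/to/tensorflow-<version>-<python-tag>-linux_x86_64.whl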

Since we will build TensorFlow with CPU support only, the physical server does not need to be equipped with additional graphics card(s) mounted on its PCI slot(s). This differs from building TensorFlow with GPU support, in which case we need at least one external (non built-in) graphics card that supports CUDA. Naturally, running TensorFlow on the CPU is an economical approach to deep learning. Then how about the performance? Benchmark results have shown that GPUs perform better than CPUs on deep learning tasks, especially model training. However, this does not mean that TensorFlow on CPU is not a feasible option. With proper CPU optimization, TensorFlow can exhibit improved performance that is comparable to its GPU counterpart; a sketch of such an optimized build is given below. When cost is a more serious issue, say when model training and inference can only be done in the cloud, leaning towards the CPU-only TensorFlow can be a decision that also makes more sense from a financial standpoint.
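For reference, a CPU-optimized build from source generally amounts to configuring TensorFlow, building the pip package target with Bazel while passing optimization flags such as -march=native (so the compiler targets the instruction sets of the build machine's CPU; adjust if building for a different target), and then packaging the result into a WHL file. A minimal sketch follows, noting that the exact flags and Bazel targets may differ between TensorFlow versions:

$ ./configure
$ bazel build -c opt --copt=-march=native //tensorflow/tools/pip_package:build_pip_package
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

The resulting WHL file lands in /tmp/tensorflow_pkg and can then be installed with pip as shown earlier.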