TensorFlow on 32-bit Linux?

We have only tested the TensorFlow distribution on 64-bit Linux and Mac OS X, and distribute binary packages for those platforms only. Try following the source installation instructions to build a version for your platform.

EDIT: One user has published instructions for running TensorFlow on a 32-bit ARM processor, which is promising for other 32-bit architectures. These instructions may have useful pointers for getting TensorFlow and Bazel to work in a 32-bit environment.


I've built a CPU-only version of TensorFlow on 32-bit Ubuntu (16.04.1 Xubuntu). It went a lot more smoothly than anticipated for such a complex library that doesn't officially support 32-bit architectures.

It can be done by following a subset of the steps that these two guides have in common:

  • November 2015 walkthrough about Jetson TK1.
  • November 2016 walkthrough about Jetson TX1.

If I haven't forgotten anything, here are the steps I've taken:

  1. Install Oracle Java 8 JDK:

    $ sudo apt-get remove icedtea-8-plugin  # This is just in case
    $ sudo add-apt-repository ppa:webupd8team/java
    $ sudo apt-get update
    $ sudo apt-get install oracle-java8-installer
    

(This is all you need on a pristine Xubuntu install; otherwise, google the above keywords to read about selecting a default JRE and javac.)
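
On Ubuntu, the usual way to choose the default java and javac is update-alternatives; the installer above normally takes care of this, so the sketch below is only needed if another JDK is also present:

    $ sudo update-alternatives --config java   # pick the Oracle Java 8 entry
    $ sudo update-alternatives --config javac
    $ java -version && javac -version          # both should report 1.8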

  2. Dependencies:

    sudo apt-get update
    sudo apt-get install git zip unzip swig python-numpy python-dev python-pip python-wheel
    pip install --upgrade pip
    
  3. Following the instructions that come with Bazel, download a Bazel source zip (I got bazel-0.4.3-dist.zip), make a directory like ~/tf/bazel/ and unzip it there.
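
In case it helps, a rough sketch of that step; the download URL is my guess at the usual location of Bazel dist zips on GitHub, so verify it against the Bazel releases page first:

    $ mkdir -p ~/tf/bazel && cd ~/tf/bazel
    $ # assumed URL; check https://github.com/bazelbuild/bazel/releases
    $ wget https://github.com/bazelbuild/bazel/releases/download/0.4.3/bazel-0.4.3-dist.zip
    $ unzip bazel-0.4.3-dist.zip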

  4. I was getting an OutOfMemoryError during the following build, but adding -J-Xmx512m to the bootstrap build took care of it (a sketch of the fix follows).
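
A sketch of what the fix amounts to, assuming the bootstrap javac invocation lives in scripts/bootstrap/compile.sh (the exact file and line can differ between Bazel versions, so treat this as a pointer rather than a recipe):

    $ # find the javac call used by the bootstrap build...
    $ grep -n 'JAVAC' scripts/bootstrap/compile.sh
    $ # ...and append -J-Xmx512m to its flags, so it ends up looking roughly like:
    $ #   run "${JAVAC}" ... -encoding UTF-8 "@${paramfile}" -J-Xmx512m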

  5. Call bash ./compile.sh and wait a long time (overnight for me, but see the remarks at the end).
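
If memory serves, the bootstrap build leaves the finished binary at output/bazel; put it somewhere on your PATH so the bazel commands below can be run from anywhere:

    $ sudo cp output/bazel /usr/local/bin/bazel
    $ bazel version    # quick sanity check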

  6. $ git clone -b r0.12 https://github.com/tensorflow/tensorflow

  7. This seems like the only change to the source code that was necessary!

    $ cd tensorflow
    $ grep -Rl "lib64" | xargs sed -i 's/lib64/lib/g'
    
  8. Then $ ./configure and say no to everything (accept defaults where relevant).
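
For reference, the configure script of that era can (as far as I remember) be pre-seeded through environment variables instead of answering interactively; the variable names below are what I recall from r0.12, so double-check them against your checkout:

    $ PYTHON_BIN_PATH=$(which python) TF_NEED_GCP=0 TF_NEED_HDFS=0 TF_NEED_CUDA=0 ./configure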

  9. The following took quite a few hours with my setup:

    $ bazel build -c opt --jobs 1 --local_resources 1024,0.5,1.0 --verbose_failures //tensorflow/tools/pip_package:build_pip_package
    $ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    $ pip install --user /tmp/tensorflow_pkg/ten<Press TAB here>
    

To check that it's installed, see if it works on the TensorFlow beginners' tutorial. I use jupyter qtconsole (i.e. the new name of IPython). Run the code in mnist_softmax.py; it should take little time even on very limited machines.
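
A quick smoke test from the shell before opening the tutorial (the script path assumes the r0.12 source tree layout):

    $ python -c "import tensorflow as tf; print(tf.__version__)"
    $ # the beginners' tutorial script ships with the source tree
    $ python tensorflow/examples/tutorials/mnist/mnist_softmax.py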

For some reason, TensorFlow's guide to building from source doesn't suggest running the unit tests:

$ bazel test //tensorflow/...

(Yes, type in the ellipsis literally.)

Though I couldn't run them myself: the build spent 19 hours trying to link libtensorflow_cc.so, and then something killed the linker. This was with half a core and a 1536 MB memory limit. Maybe someone with a larger machine can report on how the unit tests go.

Why didn't we need to do the other things mentioned in those two walkthroughs? Firstly, most of that work is about taking care of GPU interfacing. Secondly, both Bazel and TensorFlow have become more self-contained since the first of those walkthroughs was written.

Note that the above settings provided to Bazel for the build are very conservative (1024 MB RAM, half a core, one job at a time), because I'm running this through VirtualBox using a single core of a $200 netbook of the type that Intel makes for disadvantaged kids in Venezuela, Pakistan and Nigeria. (By the way, if you do this, make sure the virtual HDD is at least 20 GB; trying to build the unit tests above took about 5 GB of space.) The build of the wheel took almost 20 hours, and the modest deep CNN from the second tutorial, which is quoted to take up to half an hour to run on modern desktop CPUs, takes about 80 hours under this setup.

One might wonder why I don't get a desktop, but the truth is that actual training with TensorFlow only makes sense on a high-end GPU (or a bunch thereof), and when we can hire an AWS spot instance with such a GPU for about 10 cents an hour, without commitment and on a workable ad-hoc basis, it doesn't make a lot of sense to be training elsewhere. The 480000% speed-up is really noticeable. On the other hand, the convenience of having a local installation is well worth going through a process such as the above.
