
Are binaries portable across different CPU architectures?

No. Binaries must be (re)compiled for the target architecture, and Linux offers nothing like fat binaries out of the box. This is because code is compiled to machine code for a specific architecture, and machine code is very different between most processor families (ARM and x86, for instance, are very different).
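You can see this stamped into any compiled program: the ELF header records the target architecture, and the `file` utility reads it (the path below is just an example):

```shell
# Read the architecture stamped into an existing binary's ELF header:
file /bin/ls
# On an x86-64 system this prints something like:
#   /bin/ls: ELF 64-bit LSB pie executable, x86-64, ...
# The same file copied to an ARM board fails with "Exec format error".
```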

EDIT: it is worth noting that some architectures offer levels of backwards compatibility (and, even more rarely, compatibility with other architectures); on 64-bit CPUs, it's common to have backwards compatibility with 32-bit editions (but remember: your dependent libraries must also be 32-bit, including your C standard library, unless you statically link). Also worth mentioning is Itanium, where it was possible to run x86 code (32-bit only), albeit very slowly; the poor execution speed of x86 code was at least part of the reason it wasn't very successful in the market.
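As a sketch of the 32-bit case on x86-64: gcc can target 32-bit x86 with `-m32`, provided the 32-bit support libraries are installed (on Debian/Ubuntu they come from the `gcc-multilib` package; the package name is distro-specific):

```shell
# Build a 32-bit binary on a 64-bit x86 host; the 64-bit kernel runs it
# through its 32-bit compatibility layer.
echo 'int main(void){return 0;}' > hello.c
if gcc -m32 -o hello32 hello.c 2>/dev/null; then
    echo "built hello32 as a 32-bit ELF binary"   # file(1) would say "Intel 80386"
else
    echo "32-bit support libraries are not installed on this host"
fi
```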

Bear in mind that you still cannot use binaries compiled with newer instructions on older CPUs, even in compatibility modes (for example, you cannot use AVX in a 32-bit binary on Nehalem x86 processors; the CPU just doesn't support it).
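On Linux you can check which extensions a CPU advertises before deploying such a binary; for example (x86-specific: ARM kernels expose a `Features` line in /proc/cpuinfo instead of `flags`):

```shell
# Does this CPU support AVX? A binary built with -mavx dies with SIGILL
# (illegal instruction) on a CPU that lacks the flag.
if grep -m1 '^flags' /proc/cpuinfo | grep -qw avx; then
    echo "CPU supports AVX"
else
    echo "CPU lacks AVX"
fi
```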

Note that kernel modules must be compiled for the relevant architecture; in addition, 32-bit kernel modules will not work on 64-bit kernels or vice versa.

For information on cross-compiling binaries (so you don't have to have a toolchain on the target ARM device), see grochmal's comprehensive answer below.


Elizabeth Myers is correct: each architecture requires a binary compiled for the architecture in question. To build binaries for a different architecture than the one your system runs on, you need a cross-compiler.


In most cases you need to compile the cross compiler yourself. I only have experience with gcc (but I believe that llvm, and other compilers, have similar parameters). A gcc cross-compiler is obtained by passing --target to configure:

./configure --build=i686-arch-linux-gnu --target=arm-none-linux-gnueabi

You need to compile gcc, glibc and binutils with these parameters (and provide the kernel headers of the kernel running on the target machine).

In practice this is considerably more complicated, and different build errors pop up on different systems.
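The usual build order looks roughly like this (a sketch, not a tested recipe; versions, install prefix and extra configure flags all vary):

```shell
# Hypothetical layout: toolchain installed under $HOME/x-tools/<triplet>.
TARGET=arm-none-linux-gnueabi
PREFIX="$HOME/x-tools/$TARGET"

# 1. binutils first: provides the cross assembler and linker
#      ../binutils-x.y/configure --target=$TARGET --prefix=$PREFIX
# 2. kernel headers matching the kernel on the target:
#      make ARCH=arm INSTALL_HDR_PATH=$PREFIX/$TARGET headers_install
# 3. a minimal stage-1 gcc (C only, no libc yet)
# 4. glibc, compiled with that stage-1 gcc
# 5. the final gcc, now able to link against the cross-built glibc
echo "toolchain will land under $PREFIX"
```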

There are several guides out there on how to compile the GNU toolchain, but I'll recommend Linux From Scratch, which is continuously maintained and does a very good job of explaining what the presented commands do.

Another option is a bootstrapped compilation of a cross-compiler. The struggle of compiling cross compilers for different architectures on different architectures is what led to the creation of crosstool-ng. It provides a bootstrap over the toolchain needed to build a cross compiler.

crosstool-ng supports several target triplets on different host architectures; basically it is a bootstrap where people dedicate their time to sorting out the problems that occur during the compilation of a cross-compiler toolchain.
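A typical crosstool-ng session looks like this (ct-ng must already be installed; the sample name below is one of the target configurations it ships — the commands are shown as comments since they only make sense with ct-ng present):

```shell
#   ct-ng list-samples                # list the pre-tuned target configs
#   ct-ng arm-unknown-linux-gnueabi   # start from one of the samples
#   ct-ng menuconfig                  # optionally adjust versions/options
#   ct-ng build                       # build the whole toolchain
command -v ct-ng >/dev/null || echo "ct-ng is not installed on this machine"
```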


Several distros provide cross-compilers as packages:

  • arch provides a mingw cross-compiler and an arm eabi cross compiler out of the box, apart from other cross compilers in the AUR.

  • fedora contains several packaged cross-compilers.

  • ubuntu provides an arm cross-compiler too.

  • debian has an entire repository of cross-toolchains.

In other words, check what your distro has available in terms of cross compilers. If your distro does not have a cross compiler for your needs you can always compile it yourself.
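With a packaged cross compiler the whole dance reduces to installing a package and calling the prefixed gcc (Debian/Ubuntu names shown here; package and triplet names differ per distro):

```shell
# e.g. on Debian/Ubuntu:  apt install gcc-arm-linux-gnueabihf
echo 'int main(void){return 0;}' > hello.c
if command -v arm-linux-gnueabihf-gcc >/dev/null; then
    arm-linux-gnueabihf-gcc -o hello hello.c   # produces an ARM binary
    echo "cross-built hello for ARM"
else
    echo "cross compiler not installed on this host"
fi
```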

References:

  • ubuntu: Cross-Compile for ARM?

Kernel modules note

If you are compiling your cross-compiler by hand, you have everything you need to compile kernel modules. This is because you need the kernel headers to compile glibc.

But, if you are using a cross-compiler provided by your distro, you will need the kernel headers of the kernel that runs on the target machine.
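For reference, cross-building an out-of-tree module is driven by two make variables, shown here as a sketch (the kernel tree path is a placeholder for wherever the target's headers live):

```shell
# ARCH selects the target architecture, CROSS_COMPILE the toolchain prefix;
# the kernel tree must match the kernel actually running on the target.
#   make -C /path/to/target-kernel-tree \
#        ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- \
#        M="$PWD" modules
echo "module objects are built against the tree given to make -C"
```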


Note that as a last resort (i.e. when you don't have the source code), you can run binaries on a different architecture using emulators like qemu, dosbox or exagear. Some emulators are designed to emulate systems other than Linux (e.g. dosbox is designed to run MS-DOS programs, and there are plenty of emulators for popular gaming consoles). Emulation has a significant performance overhead: emulated programs run 2-10 times slower than their native counterparts.
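For single Linux programs, qemu's user-mode emulators are the lightest option: they translate one foreign binary's instructions while passing its system calls to the host kernel (qemu-user must be installed; the binary and library path below are illustrative):

```shell
# Run a 32-bit ARM binary on, say, an x86-64 host; -L points at the
# target's libraries for dynamically linked programs:
#   qemu-arm -L /usr/arm-linux-gnueabihf ./hello
# Statically linked binaries need no -L; with binfmt_misc registered,
# the kernel can even invoke qemu-arm transparently on ./hello itself.
command -v qemu-arm >/dev/null || echo "qemu-arm is not installed here"
```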

If you need to run kernel modules on a non-native CPU, you'll have to emulate the whole OS, including a kernel for the same architecture. AFAIK it's impossible to run foreign code inside the Linux kernel.