How different are 8-bit microcontrollers from 32-bit microcontrollers when it comes to programming them?

In general, going from 8-bit to 16-bit to 32-bit microcontrollers means you will have fewer constraints on resources, particularly memory, and wider registers for arithmetic and logical operations. The 8-, 16-, and 32-bit monikers generally refer to both the width of the internal and external data buses and the width of the internal register(s) used for arithmetic and logical operations (there used to be just one or two, called accumulators; now there are usually register banks of 16 or 32).

I/O port sizes will also generally follow the data bus size, so an 8-bit micro will have 8-bit ports, a 16-bit micro will have 16-bit ports, etc.

Despite having an 8-bit data bus, many 8-bit microcontrollers have a 16-bit address bus and can address 2^16 or 64K bytes of memory (that doesn't mean they have anywhere near that implemented). But some 8-bit micros, like the low-end PICs, may have only a very limited RAM space (e.g. 96 bytes on a PIC16).

To get around their limited addressing scheme, some 8-bit micros use paging, where the contents of a page register determines one of several banks of memory to use. There will usually be some common RAM available no matter what the page register is set to.
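As a rough sketch of what that looks like from the software side (the register names below are invented for illustration; on a real part the datasheet names them, and a C compiler usually handles the bank switching for you, while assembly programmers do it explicitly), the bank has to be selected before the paged location is touched:

    /* Hypothetical sketch of banked RAM access on an 8-bit part; the
       register names and addresses are made up for illustration. */
    #define BANK_SELECT (*(volatile unsigned char *)0x0008)  /* page/bank select register   */
    #define PAGED_BYTE  (*(volatile unsigned char *)0x00A0)  /* address shared by all banks */

    unsigned char read_banked(unsigned char bank)
    {
        BANK_SELECT = bank;   /* pick which physical bank the paged window maps to          */
        return PAGED_BYTE;    /* the same address now reaches a different byte of RAM       */
    }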

16-bit microcontrollers are generally restricted to 64K of memory, but may also use paging techniques to get around this. 32-bit microcontrollers, of course, have no such restriction and can address up to 4 GB of memory.

Along with the different memory sizes comes the stack size. In the lower-end micros, the stack may be implemented in a special area of memory and be very small (many PIC16s have an 8-level-deep call stack). In the 16-bit and 32-bit micros, the stack will usually be in general RAM and be limited only by the size of that RAM.

There are also vast differences in the amount of memory -- both program and RAM -- implemented on the various devices. 8-bit micros may have only a few hundred bytes of RAM and a few thousand bytes of program memory (or much less -- for example, the PIC10F320 has only 256 14-bit words of flash and 64 bytes of RAM). 16-bit micros may have a few thousand bytes of RAM and tens of thousands of bytes of program memory. 32-bit micros often have over 64K bytes of RAM and perhaps 1/2 MB or more of program memory (the PIC32MZ2048 has 2 MB of flash and 512 KB of RAM; the newly released PIC32MZ2064DAH176, optimized for graphics, has 2 MB of flash and a whopping 32 MB of on-chip RAM).

If you are programming in assembly language, the register-size limitations will be very evident; for example, adding two 32-bit numbers is a chore on an 8-bit microcontroller but trivial on a 32-bit one. If you are programming in C, this will be largely transparent, but of course the underlying compiled code will be much larger for the 8-bitter.
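To make that concrete, here is a rough sketch of what a 32-bit addition amounts to when it has to be built from 8-bit pieces, which is effectively the work the compiler (or the assembly programmer) does for you on an 8-bit part:

    #include <stdint.h>

    /* On a 32-bit micro, a + b is a single add instruction. On an 8-bit
       micro the same operation becomes four byte-wide adds with the carry
       propagated between them; the compiler normally hides this, but the
       generated code still has to do the work shown here. */
    uint32_t add32_bytewise(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        uint8_t  carry  = 0;

        for (int i = 0; i < 4; i++) {
            uint8_t  a_byte = (uint8_t)(a >> (8 * i));
            uint8_t  b_byte = (uint8_t)(b >> (8 * i));
            uint16_t sum    = (uint16_t)a_byte + b_byte + carry; /* add with carry in        */

            result |= (uint32_t)(uint8_t)sum << (8 * i);         /* keep the low byte        */
            carry   = (uint8_t)(sum >> 8);                       /* carry out to next byte   */
        }
        return result;
    }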

I said largely transparent because the size of various C data types may differ from one size of micro to another; for example, a compiler that targets an 8- or 16-bit micro may use "int" to mean a 16-bit signed variable, whereas on a 32-bit micro this would be a 32-bit variable. So a lot of programs use #defines to say explicitly what the desired size is, such as "UINT16" for an unsigned 16-bit variable.
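The portable way to do this nowadays is the fixed-width types from <stdint.h> (standardized in C99), which serve the same purpose as project-specific UINT16-style names:

    #include <stdint.h>

    /* These widths hold regardless of whether the target's "int" is
       16 or 32 bits. */
    uint8_t  status_flags;   /* exactly 8 bits, unsigned  */
    int16_t  temperature;    /* exactly 16 bits, signed   */
    uint32_t tick_count;     /* exactly 32 bits, unsigned */

    /* Older codebases often define their own equivalents instead: */
    typedef unsigned short UINT16;   /* assumes short is 16 bits on this target */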

If you are programming in C, the biggest impact will be the size of your variables. For example, if you know a variable will always be less than 256 (or within -128 to 127 if signed), then you should use an 8-bit variable (unsigned char or char) on an 8-bit micro (e.g. PIC16), since using a larger size would be very inefficient. The same goes for 16-bit variables on a 16-bit micro (e.g. PIC24). If you are using a 32-bit micro (PIC32), then it doesn't really make any difference, since the MIPS instruction set has byte, word, and double-word instructions. However, on some 32-bit micros that lack such instructions, manipulating an 8-bit variable may be less efficient than a 32-bit one because of the masking required.
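If you would rather let the compiler make that size-versus-speed decision for you, <stdint.h> also defines the "fast" types: uint_fast8_t, for instance, is guaranteed to be at least 8 bits, but the implementation may pick a wider type if that is quicker on the target, which sidesteps the masking issue on 32-bit parts:

    #include <stdint.h>

    /* uint_fast8_t is at least 8 bits wide; an 8-bit compiler will make it
       a plain byte, while a 32-bit compiler is free to widen it to the
       natural register size and avoid masking. */
    uint16_t sum_samples(const uint8_t *samples, uint_fast8_t count)
    {
        uint_fast16_t total = 0;              /* at least 16 bits, whatever is fastest */

        for (uint_fast8_t i = 0; i < count; i++) {
            total += samples[i];
        }
        return (uint16_t)total;               /* 255 samples * 255 max still fits in 16 bits */
    }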

As forum member vsz pointed out, on systems where you have a variable that is larger than the default register size (e.g. a 16-bit variable on an 8-bit micro), and that variable is shared between two threads or between the base thread and an interrupt handler, any operation on the variable (including just reading it) must be made atomic, that is, made to appear as if it were done in one instruction. The block of code doing this is called a critical section. The standard way to protect it is to surround the critical section with a disable/enable interrupt pair.
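A minimal sketch of that pattern, assuming hypothetical disable_interrupts()/enable_interrupts() routines (real parts provide equivalents such as the di/ei instructions on PIC or cli/sei on AVR, usually wrapped in compiler intrinsics):

    #include <stdint.h>

    /* 16-bit value shared with an interrupt handler on an 8-bit micro:
       it is read and written one byte at a time. */
    volatile uint16_t shared_counter;

    /* Hypothetical names; substitute your vendor's interrupt-control calls. */
    void disable_interrupts(void);
    void enable_interrupts(void);

    uint16_t read_shared_counter(void)
    {
        uint16_t copy;

        disable_interrupts();     /* start of critical section                 */
        copy = shared_counter;    /* both bytes now read without the ISR       */
                                  /* updating the value halfway through        */
        enable_interrupts();      /* end of critical section                   */

        return copy;
    }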

So, going from 32-bit systems to 16-bit, or from 16-bit to 8-bit, any operations on variables that are now larger than the default register size (but weren't before) need to be treated as critical sections.

Another main difference, going from one PIC processor to another, is the handling of peripherals. This has less to do with word size and more to do with the type and number of resources allocated on each chip. In general, Microchip has tried to make programming the same peripheral as similar as possible across different chips (e.g. timer0), but there will always be differences. Using their peripheral libraries will hide these differences to a large extent. A final difference is the handling of interrupts. Again, there is help here from the Microchip libraries.


One common difference between 8-bit and 32-bit microcontrollers is that 8-bit ones often have a range of memory and I/O space which may be accessed in a single instruction, regardless of execution context, while 32-bit microcontrollers will frequently require a multi-instruction sequence. For example, on a typical 8-bit microcontroller (HC05, 8051, PIC-18F, etc.) one may change the state of a port bit using a single instruction. On a typical ARM (32-bit), if register contents were initially unknown, a four-instruction sequence would be needed:

    ldr  r0, =GPIOA            @ load the base address of the GPIO block
    ldrh r1, [r0, #GPIO_DDR]   @ read the current register value
    orr  r1, r1, #64           @ set bit 6
    strh r1, [r0, #GPIO_DDR]   @ write the result back
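In C, both cases are normally written as the same read-modify-write statement; the difference is only in the code the compiler emits for it (a single bit-set instruction on many 8-bit parts, a sequence like the one above on the ARM). A sketch with a made-up register definition:

    /* Hypothetical register definition; vendor headers supply the real one. */
    #define GPIOA_DDR (*(volatile unsigned short *)0x40010400)

    void make_pin6_output(void)
    {
        GPIOA_DDR |= (1u << 6);   /* one instruction on many 8-bit parts,
                                     a load/or/store sequence on a typical ARM */
    }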

In most projects, the controller will spend the vast majority of its time doing things other than setting or clearing individual I/O bits, so the fact that operations like clearing a port pin require more instructions often won't matter. On the other hand, there are times when code will have to bit-bang a lot of port manipulations, and the ability to do such things with a single instruction can prove quite valuable.

On the flip side, 32-bit controllers are invariably designed to efficiently access many kinds of data structures which can be stored in memory. Many 8-bit controllers, by comparison, are very inefficient at accessing data structures which aren't statically allocated. A 32-bit controller may perform in one instruction an array access that would take half a dozen or more instructions on a typical 8-bit controller.
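For instance, the simple array access below typically compiles to a single load with a scaled register offset on an ARM-class part (something like ldr r0, [r0, r1, lsl #2]), whereas a typical 8-bit part must first build the 16-bit address with multi-byte pointer arithmetic and then fetch the four result bytes one at a time:

    #include <stdint.h>

    /* One indexed load on a typical 32-bit core; many instructions of
       address arithmetic plus four byte reads on a typical 8-bit core. */
    uint32_t get_element(const uint32_t *table, uint32_t index)
    {
        return table[index];
    }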


The biggest practical difference is the amount of documentation you have to get through to understand the entire chip. There are 8-bit microcontrollers out there that come with almost 1000 pages of documentation. Compare that to roughly 200-300 pages for a 1980s 8-bit CPU and the popular peripheral chips it would be used with. A peripheral-rich 32-bit device will require you to go through 2000-10,000 pages of documentation to understand the part. Parts with modern 3D graphics edge toward 20,000 pages of documentation.

In my experience, it takes about 10x as long to learn everything there is to know about a given modern 32-bit controller as it does for a modern 8-bit part. By "everything" I mean that you know how to use all of the peripherals, even in unconventional ways, and know the machine language, the assembler and other tools the platform uses, the ABI(s), etc.

It is entirely plausible that many, many designs are done with only partial understanding. Sometimes that is inconsequential, sometimes it isn't. Switching platforms has to be done with the understanding that there will be a short- and mid-term price in productivity, paid for the perceived productivity gains of a more powerful architecture. Do your due diligence.