Microcontroller programming vs. object-oriented programming

You will have to completely abandon the object-oriented paradigm when dealing with most microcontrollers.

Microcontrollers are generally register- and RAM-limited, with slow clock rates and no pipelining / parallel code paths. You can forget about Java on a PIC, for example.

You have to get into an assembly-language mindset, and write procedurally.

You have to keep your code relatively flat and avoid recursion, as limited RAM means the stack can overflow surprisingly easily.

You have to learn how to write interrupt service routines which are efficient (usually in assembly language).

You may have to refactor parts of the code manually, in assembly language, to implement functionality that the compiler doesn't support (or supports poorly).

You have to write mathematical code that takes into account the word size and the lack of an FPU on most microcontrollers (e.g. doing 32-bit multiplication on an 8-bit micro = evil).
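
As a small illustration of minding word sizes, keeping the arithmetic in integers avoids dragging in floating-point library routines. The ADC resolution and reference voltage below are invented for the example, and even the single 32-bit multiply/divide is the kind of operation you would think twice about putting in a tight loop on an 8-bit part:

    #include <stdint.h>

    /* Hypothetical example: convert a 10-bit ADC reading to millivolts
     * without floating point.  ADC_MAX and VREF_MV are made-up values. */
    #define ADC_MAX   1023u
    #define VREF_MV   5000ul   /* 5.000 V reference, expressed in millivolts */

    uint16_t adc_to_millivolts(uint16_t raw)
    {
        /* Promote to 32 bits before multiplying so the intermediate result
         * (up to 1023 * 5000) does not overflow a 16-bit word.  On an 8-bit
         * micro this one 32-bit multiply/divide is already costly, so keep
         * it out of any per-sample inner loop. */
        uint32_t mv = ((uint32_t)raw * VREF_MV) / ADC_MAX;
        return (uint16_t)mv;
    }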

It is a different world. To me, having a computer science or professional programming background can be as much of a hindrance as having no knowledge at all when dealing with microcontrollers.


You need to think about several things:

  • You will use C as the language
  • You can still create a feeling of object orientation using function pointers, so that you can override functions and so on. I have used this method in past and current projects and it works very well, so OO is partially there, just not in the C++ sense (see the sketch below).
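
A minimal sketch of that function-pointer style, assuming nothing about any particular vendor API (the names log_backend_t, uart_write, etc. are made up for illustration):

    #include <stdint.h>

    /* "OO in C" with function pointers: a module exposes an interface
     * struct, and different back-ends fill it in. */
    typedef struct {
        void (*write)(const char *msg);
        void (*flush)(void);
    } log_backend_t;

    static void uart_write(const char *msg)  { (void)msg; /* push msg to the UART */ }
    static void uart_flush(void)             { /* wait for the TX buffer to drain */ }

    static const log_backend_t uart_logger = { uart_write, uart_flush };

    /* "Virtual call": callers only see the interface, so the back-end can
     * be overridden, e.g. swapped for a RAM buffer in unit tests. */
    static void log_message(const log_backend_t *log, const char *msg)
    {
        log->write(msg);
        log->flush();
    }

A test build can then pass a different log_backend_t that writes into a RAM buffer instead of the UART, which is about as close to "overriding" as plain C gets.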

There are other limitations that will come into play, such as limited speed and memory. So, as a general guideline:

  • I avoid using the heap: if there is a way to solve the problem without malloc, I do that. For example, I preallocate buffers and just use them (see the first sketch below).
  • I intentionally reduce the stack size in the compiler settings so that stack-size issues show up early, and then tune it carefully.
  • I assume every single line of code will be interrupted by an event, so I avoid non-reentrant code (see the second sketch below).
  • I assume even interrupts can be nested, and write the ISR code accordingly.
  • I avoid using an OS unless it is necessary; roughly 70% of embedded projects don't really need one. If I must use an OS, I only use something with source code available (FreeRTOS, etc.).
  • If I am using an OS, I almost always abstract it so that I can change OS in a matter of hours.
  • For drivers and the like, I only use the libraries provided by the vendor; I never fiddle with the bits directly unless I have no other choice. This keeps the code readable and makes debugging easier.
  • I look at loops and other constructs, especially in ISRs, to make sure they are fast enough.
  • I always keep a few GPIOs handy to measure things: context-switch time, ISR run time, etc.

The list goes on. I am probably below average in terms of software programming, so I am sure there are better practices.
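
For the malloc-free point above, a rough sketch of what preallocating buffers can look like; the pool size and buffer size are arbitrary, and this simple version is not safe to call from an ISR without guarding:

    #include <stddef.h>
    #include <stdint.h>

    /* Preallocated buffers instead of malloc(): the pool lives in .bss,
     * so worst-case RAM use is known at link time. */
    #define NUM_BUFFERS  4
    #define BUFFER_SIZE  64

    static uint8_t buffer_pool[NUM_BUFFERS][BUFFER_SIZE];
    static uint8_t buffer_in_use[NUM_BUFFERS];

    uint8_t *buffer_alloc(void)
    {
        for (size_t i = 0; i < NUM_BUFFERS; i++) {
            if (!buffer_in_use[i]) {
                buffer_in_use[i] = 1;
                return buffer_pool[i];
            }
        }
        return NULL;             /* pool exhausted: the caller must cope */
    }

    void buffer_free(uint8_t *buf)
    {
        for (size_t i = 0; i < NUM_BUFFERS; i++) {
            if (buf == buffer_pool[i]) {
                buffer_in_use[i] = 0;
                return;
            }
        }
    }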
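
And for the reentrancy point: anything shared between an ISR and the main loop gets volatile, and anything wider than the CPU can read atomically gets an interrupt guard while it is copied. disable_irq()/enable_irq() below are stand-ins for whatever intrinsic your toolchain provides (cli()/sei() on AVR, for instance):

    #include <stdint.h>

    /* Placeholders for the toolchain's interrupt-masking intrinsics. */
    extern void disable_irq(void);
    extern void enable_irq(void);

    /* Shared between an ISR and the main loop: must be volatile. */
    static volatile uint32_t tick_count;

    void timer_isr(void)          /* hooked to the timer vector for this target */
    {
        tick_count++;             /* keep the ISR short: just record the event */
    }

    uint32_t get_tick_count(void)
    {
        uint32_t snapshot;
        /* On an 8-bit core a 32-bit read is not atomic, so mask interrupts
         * briefly while taking a coherent copy. */
        disable_irq();
        snapshot = tick_count;
        enable_irq();
        return snapshot;
    }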


I do both, so here's my view.

I think the most important skill by far in embedded work is your debugging ability. The required mindset is quite different, because so much more can go wrong, and you must be very open to considering all the different ways that what you are trying to do can fail.

This is the single biggest issue for new embedded developers. PC people tend to have it rougher, as they're used to so much just working for them. They'll tend to waste a lot of time searching for tools to do things for them instead (hint: there aren't many). There's a lot of banging heads into walls over and over, not knowing what else to do. If you feel you're getting stuck, step back and figure out whether you can identify everything that might be going wrong. Systematically narrow your list of potential problems until you figure it out. It follows directly from this process that you should limit the scope of problems by not changing too much at once.

Experienced embedded people tend to take debugging for granted... most of the people who can't do it well don't last long (or work in large companies that simply accept "firmware is hard" as an answer for why a certain feature is years late).

You're working on code that runs on a system external to your development machine, with varying degrees of visibility into your target from platform to platform. If it is under your control, push for development aids that increase this visibility into the target system. Use debug serial ports, bit-banged debug output, the famous blinking light, etc. Certainly, at a minimum, learn how to use an oscilloscope, and use pin I/O with the 'scope to see when certain functions enter/exit, when ISRs fire, and so on. I've watched people struggle for literally years longer than necessary simply because they never bothered to set up and learn how to use a proper JTAG debugger link.
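
A typical version of the pin-toggling trick: dedicate a spare output, raise it at the top of the code of interest and drop it at the bottom, and the pulse width on the scope is your execution time. DEBUG_PIN_HIGH/DEBUG_PIN_LOW below are placeholders for a direct port write on your particular target:

    /* Stand-ins for a direct port write on the target
     * (e.g. PORTB |= (1 << 3) on an AVR, or a vendor HAL call). */
    #define DEBUG_PIN_HIGH()   do { /* set the spare debug pin   */ } while (0)
    #define DEBUG_PIN_LOW()    do { /* clear the spare debug pin */ } while (0)

    void sensor_isr(void)
    {
        DEBUG_PIN_HIGH();      /* pulse width on the 'scope = ISR run time */
        /* ... the real interrupt work ... */
        DEBUG_PIN_LOW();
    }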

It's much more important to be very aware of exactly what resources you have relative to a PC. Read the datasheets carefully. Consider the resource 'cost' of anything you are trying to do. Learn resource-oriented debugging tricks like filling stack space with a magic value to track stack usage.
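
The stack-fill trick mentioned above looks roughly like this; __stack_start/__stack_end are placeholders for whatever symbols your linker script exports for the stack region, and the sketch assumes a descending stack:

    #include <stddef.h>
    #include <stdint.h>

    #define STACK_MAGIC  0xA5u

    /* Placeholders for linker-script symbols bounding the stack region. */
    extern uint8_t __stack_start[];   /* lowest address of the stack region  */
    extern uint8_t __stack_end[];     /* highest address of the stack region */

    /* Call as early as possible, before the stack gets deep.  Painting only
     * up to the current stack pointer (approximated by a local's address)
     * avoids scribbling over the frames already in use. */
    void stack_paint(void)
    {
        uint8_t marker;
        for (uint8_t *p = __stack_start; p < &marker; p++) {
            *p = STACK_MAGIC;
        }
    }

    /* Later (periodically, or at a breakpoint), count the bytes that still
     * hold the pattern: that is how much stack was never touched. */
    size_t stack_bytes_never_used(void)
    {
        size_t n = 0;
        for (const uint8_t *p = __stack_start;
             p < __stack_end && *p == STACK_MAGIC; p++) {
            n++;
        }
        return n;
    }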

While some degree of debugging skill is required for both PC and embedded software, it's much more important with embedded.