Using millis() and micros() inside an interrupt routine

The other answers are very good, but I want to elaborate on how micros() works. It always reads the current hardware timer count (TCNT0 on a Uno), which is constantly being updated by the hardware (every 4 µs on a 16 MHz board, because of the prescaler of 64). It then adds in the Timer 0 overflow count (multiplied by 256), which is updated by a timer overflow interrupt.

Thus, even inside an ISR, you can rely on micros() updating. However, if you wait too long you miss the overflow update, and the result returned will jump backwards (e.g. the timer count wraps: 253, 254, 255, 0, 1, 2, 3 and so on, while the overflow count it is added to does not advance).

This is micros() - slightly simplified to remove defines for other processors:

unsigned long micros() {
    unsigned long m;
    uint8_t oldSREG = SREG, t;
    cli();                                 // disable interrupts for an atomic read
    m = timer0_overflow_count;             // overflow count, maintained by the overflow ISR
    t = TCNT0;                             // current hardware timer count
    if ((TIFR0 & _BV(TOV0)) && (t < 255))  // overflow pending but not yet handled?
        m++;                               // allow for it - but only once
    SREG = oldSREG;                        // restore previous interrupt state
    return ((m << 8) + t) * (64 / clockCyclesPerMicrosecond());  // ticks to µs
}

The code above allows for a pending overflow (it checks the TOV0 flag), so it can cope with one overflow occurring while interrupts are off, but only one: there is no provision for handling two overflows.
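To make that limit concrete, here is a worked illustration (hypothetical counter values; assuming a 16 MHz board, where one overflow is 1024 µs):

// Interrupts are disabled with timer0_overflow_count = 100, TCNT0 near wrap:
//
//   one overflow missed : TOV0 is set and t has wrapped to a small value,
//                         so the (t < 255) check bumps m to 101 - correct.
//   two overflows missed: TOV0 is a single flag, so it still just reads "set";
//                         m is bumped once instead of twice, and micros()
//                         comes out 1024 µs low - time appears to jump back.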


TL;DR:

  • Don't do delays inside an ISR.
  • If you must do them, you can time them with micros(), but not millis(). delayMicroseconds() is also a possibility (see the sketch after this list).
  • Don't delay more than 500 µs or so, or you'll miss a timer overflow.
  • Even short delays may cause you to miss incoming serial data (at 115200 baud you will get a new character every 87 µs).
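As a minimal sketch of the second and third points (the pin number and the 100 µs wait are my own, hypothetical choices):

const byte INT_PIN = 2;                 // hypothetical external-interrupt pin

volatile unsigned long pulseCount;

void myIsr() {
  unsigned long start = micros();       // micros() still advances inside an ISR
  while (micros() - start < 100) { }    // 100 µs: safely under the 500 µs limit
  // delayMicroseconds(100);            // alternative: counts CPU cycles, so it
                                        // needs no interrupts at all
  pulseCount++;
}

void setup() {
  pinMode(INT_PIN, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(INT_PIN), myIsr, FALLING);
}

void loop() { }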

It is not wrong to use millis() or micros() within an interrupt routine.

It is wrong to use them incorrectly.

The main thing here is that while you are in an interrupt routine "the clock isn't ticking": millis() and micros() won't change (well, micros() will at first, but once it goes past that magic point where a millisecond tick is required, it all falls apart).

So you can certainly call on millis() or micros() to find out the current time within your ISR, but don't expect that time to change.

It is that lack of change in the time that is being warned about in the quote you provide. delay() relies on millis() changing to know how much time has passed. Since it doesn't change, delay() can never finish.

So essentially millis() and micros() will tell you the time when your ISR was called no matter when in your ISR you use them.
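To illustrate (a hypothetical, deliberately bad handler): reading millis() at the start and at the end of a slow ISR yields the same value, because the Timer 0 overflow interrupt that would advance it cannot run in the meantime.

volatile unsigned long entryMs, exitMs;

void slowIsr() {               // attached with attachInterrupt() elsewhere
  entryMs = millis();
  delayMicroseconds(3000);     // deliberately far too long - demonstration only
  exitMs = millis();           // same value as entryMs: the clock is frozen
}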


The quoted phrase is not a warning; it is merely a statement about how things work.

There's nothing intrinsically wrong with using millis() or micros() within a properly-written interrupt routine.

On the other hand, doing anything at all within an improperly-written interrupt routine is by definition wrong.

An interrupt routine that takes more than a few microseconds to do its job is, in all likelihood, improperly written.

In short: A properly-written interrupt routine will not cause or encounter issues with millis() or micros().
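As a sketch of what "properly-written" usually means in practice (the names and pin here are mine, not from any particular library): the ISR only records a timestamp and sets a flag, and the slow work happens in loop(), where the clock ticks normally.

volatile unsigned long eventMicros;
volatile bool eventFlag;

void onEdge() {                    // runs for only a few µs
  eventMicros = micros();          // fine: the time the interrupt fired
  eventFlag = true;
}

void setup() {
  Serial.begin(115200);
  attachInterrupt(digitalPinToInterrupt(2), onEdge, RISING);
}

void loop() {
  if (eventFlag) {
    noInterrupts();
    unsigned long t = eventMicros; // atomic copy of the 4-byte value
    eventFlag = false;
    interrupts();
    Serial.println(t);             // slow work deferred to loop()
  }
}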

Edit: Regarding why micros() “starts behaving erratically”: as explained in an “examination of the Arduino micros function” webpage, the micros() code on an ordinary Uno is functionally equivalent to

unsigned long micros() {
  // (overflows * 256 + current count) * 4 µs per tick (prescaler 64 / 16 MHz)
  return ((timer0_overflow_count << 8) + TCNT0) * (64 / 16);
}

This returns a four-byte unsigned long composed of the three lowest bytes of timer0_overflow_count and one byte from the timer 0 count register.

The timer0_overflow_count is incremented about once per millisecond by the TIMER0_OVF_vect interrupt handler, as explained in an “examination of the Arduino millis function” webpage.
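The arithmetic behind that rate, for a 16 MHz Uno, follows from the prescaler mentioned earlier:

// timer tick     = 64 / 16 MHz      = 4 µs     (prescaler / CPU clock)
// timer overflow = 256 ticks × 4 µs = 1024 µs  (roughly one millisecond)
// micros()       = (timer0_overflow_count * 256 + TCNT0) * 4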

Before an interrupt handler begins, AVR hardware disables interrupts. If (for example) an interrupt handler were to run for five milliseconds with interrupts still disabled, at least four timer 0 overflows would be missed. [Interrupt handlers written in C in the Arduino system are not reentrant (capable of correctly handling multiple overlapping executions of the same handler), but one could write a reentrant assembly-language handler that re-enables interrupts before it begins a time-consuming process.]

In other words, timer overflows don't “stack up”; whenever an overflow occurs before the interrupt from the previous overflow has been handled, the millis() counter loses a millisecond, and the discrepancy in timer0_overflow_count in turn makes micros() wrong by a millisecond too.

Regarding “shorter than 500 μs” as an upper time limit for interrupt processing, “to prevent blocking the timer interrupt for too long”: you could go up to just under 1024 μs (e.g. 1020 μs) and millis() would still work, most of the time. However, I regard an interrupt handler that takes more than 5 μs as a sluggard, more than 10 μs as slothful, and more than 20 μs as snail-like.