Java: manually-unrolled loop is still faster than the original loop. Why?

The loop presented likely falls into the "non-counted" category: loops whose iteration count can be determined neither at compile time nor at run time. That is not only because of @Andreas' argument about the array size, but also because of the random conditional break (which was still in your benchmark when I wrote this post).

State-of-the-art compilers do not aggressively optimize such loops, since unrolling a non-counted loop usually means duplicating its exit condition as well, which only improves run-time performance if subsequent compiler optimizations can further optimize the unrolled code. See this 2017 paper for details; it also makes proposals for unrolling such loops.

It follows that your assumption that you performed a kind of "manual unrolling" of the loop does not hold. You are treating the transformation of an iteration over an array with a conditional break into an &&-chained boolean expression as a basic loop-unrolling technique. I'd consider it a rather special case and would be surprised to see a hot-spot optimizer perform such a complex refactoring on the fly. Here they discuss what it actually might do; perhaps that reference is interesting.

The following would more closely reflect the mechanics of contemporary unrolling, though it is perhaps still nowhere near what unrolled machine code would look like:

if (!filters[0].isOK(i)) {
    return false;
}
if (!filters[1].isOK(i)) {
    return false;
}
return true;

You conclude that because one piece of code runs faster than another, the loop did not unroll. Even if it did, you could still see a runtime difference, because you are comparing different implementations.

If you want more certainty, there is the JITWatch analyzer/visualizer of the actual JIT operations, including the machine code (github) (presentation slides). If there is something to see, I'd trust my own eyes over any opinion about what the JIT may or may not do in general, since every case has its specifics. Here they worry about how difficult it is to make general statements about specific cases where the JIT is concerned, and provide some interesting links.

Since your goal is minimum runtime, the `a && b && c ...` form is likely the most efficient one if you don't want to rely on loop unrolling happening; at least it is more efficient than anything else presented so far. But you can't have it in a generic way. Functional composition with `java.util.function.Function` brings considerable overhead again (each `Function` is a class, and each call is a virtual method that needs dispatch). In such a scenario it might make sense to subvert the language level and generate custom bytecode at runtime. On the other hand, the `&&` logic requires branching at the bytecode level as well, and may be equivalent to if/return (which also cannot be generified without overhead).
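To illustrate the trade-off, here is a sketch (the predicates are invented for illustration) comparing a hand-written short-circuit chain with generic composition via `java.util.function.IntPredicate.and`, which gives the same `&&` semantics but pays a virtual dispatch through a wrapper object per step:

```java
import java.util.function.IntPredicate;

class Composition {
    static final IntPredicate F0 = i -> i % 2 == 0;   // illustrative predicates
    static final IntPredicate F1 = i -> i % 3 == 0;

    // Hand-written chain: plain short-circuit branches, easy to inline.
    static boolean chained(int i) {
        return F0.test(i) && F1.test(i);
    }

    // Generic composition: same short-circuit semantics, but each and()
    // allocates another wrapper object, and each test() is a virtual call.
    static final IntPredicate COMPOSED = F0.and(F1);
}
```

Both forms are semantically identical; the difference is only in how much work the JIT has to do to flatten the composed version back into straight-line branches.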


TL;DR The main reason for the performance difference here is not loop unrolling; it is rather type speculation and inline caches.

Unrolling strategies

In fact, in HotSpot terminology, such loops are treated as counted, and in certain cases JVM can unroll them. Not in your case though.

HotSpot has two loop unrolling strategies: 1) unroll maximally, i.e. remove the loop altogether; or 2) glue several consecutive iterations together.

Maximal unrolling can be done only if the exact number of iterations is known.

  if (!cl->has_exact_trip_count()) {
    // Trip count is not exact.
    return false;
  }
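For contrast, a loop like the following has an exact trip count known at JIT-compile time and is the kind this check allows (an illustrative sketch, not code from the question):

```java
class Counted {
    // The trip count is the constant 4, so HotSpot may unroll this
    // maximally, i.e. replace the loop with the equivalent of:
    //     return a[0] + a[1] + a[2] + a[3];
    static int sum4(int[] a) {
        int s = 0;
        for (int k = 0; k < 4; k++) {
            s += a[k];
        }
        return s;
    }
}
```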

In your case, however, the function may return early after the first iteration.

Partial unrolling could probably be applied, but the following condition breaks it:

  // Don't unroll if the next round of unrolling would push us
  // over the expected trip count of the loop.  One is subtracted
  // from the expected trip count because the pre-loop normally
  // executes 1 iteration.
  if (UnrollLimitForProfileCheck > 0 &&
      cl->profile_trip_cnt() != COUNT_UNKNOWN &&
      future_unroll_ct        > UnrollLimitForProfileCheck &&
      (float)future_unroll_ct > cl->profile_trip_cnt() - 1.0) {
    return false;
  }

Since in your case the expected trip count is less than 2, HotSpot assumes it's not worth unrolling even two iterations. Note that the first iteration is extracted into a pre-loop anyway (the loop peeling optimization), so unrolling is indeed not very beneficial here.
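The effect of loop peeling can be sketched at the source level roughly like this (a conceptual illustration of what the JIT does internally, not code you would write; the `Filter`/`isOK` names follow the question):

```java
interface Filter {
    boolean isOK(int i);
}

class Peeling {
    // Conceptual source-level view of loop peeling: the first
    // iteration is extracted into a "pre-loop" that runs once,
    // before the main loop starts at index 1.
    static boolean matchesPeeled(Filter[] filters, int i) {
        if (filters.length > 0) {
            // peeled first iteration (the pre-loop)
            if (!filters[0].isOK(i)) {
                return false;
            }
            // remaining iterations (the main loop)
            for (int j = 1; j < filters.length; j++) {
                if (!filters[j].isOK(i)) {
                    return false;
                }
            }
        }
        return true;
    }
}
```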

Type speculation

In your unrolled version, there are two different invokeinterface bytecodes, and each site collects its own type profile. The first receiver is always Filter1, and the second receiver is always Filter2. So you basically have two monomorphic call sites, and HotSpot can perfectly inline both calls: a so-called "inline cache", which has a 100% hit ratio in this case.

With the loop, there is just one invokeinterface bytecode, and only one type profile is collected. The HotSpot JVM sees that filters[j].isOK() is called 86% of the time with a Filter1 receiver and 14% of the time with a Filter2 receiver. This makes it a bimorphic call. Fortunately, HotSpot can speculatively inline bimorphic calls, too: it inlines both targets with a conditional branch. However, in this case the hit ratio will be at most 86%, and performance will suffer from the corresponding mispredicted branches at the architecture level.
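What speculative bimorphic inlining compiles to can be sketched at the source level roughly like this (Filter1/Filter2 follow the answer's naming; the predicate bodies are invented for illustration):

```java
interface Filter {
    boolean isOK(int i);
}

class Filter1 implements Filter {
    public boolean isOK(int i) { return i >= 0; }      // invented body
}

class Filter2 implements Filter {
    public boolean isOK(int i) { return i % 2 == 0; }  // invented body
}

class Bimorphic {
    // Rough source-level equivalent of the speculatively inlined call:
    // the JIT guards on the receiver class and inlines both bodies.
    static boolean isOkInlined(Filter f, int i) {
        if (f.getClass() == Filter1.class) {
            return i >= 0;          // inlined Filter1.isOK
        } else if (f.getClass() == Filter2.class) {
            return i % 2 == 0;      // inlined Filter2.isOK
        } else {
            // the real JIT would deoptimize here (uncommon trap);
            // shown as a plain interface call for illustration
            return f.isOK(i);
        }
    }
}
```

The class-check branches are exactly where the ~14% mispredictions come from in the bimorphic case.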

Things get even worse if you have three or more different filters. In that case isOK() becomes a megamorphic call, which HotSpot cannot inline at all, so the compiled code will contain a true interface call with a larger performance impact.

More about speculative inlining can be found in the article The Black Magic of (Java) Method Dispatch.

Conclusion

In order to inline virtual/interface calls, the HotSpot JVM collects type profiles per invoke bytecode. If there is a virtual call in a loop, there will be just one type profile for the call, regardless of whether the loop is unrolled.

To get the best from the virtual call optimizations, you'd need to manually split the loop, primarily for the purpose of splitting the type profiles. HotSpot cannot do this automatically, at least so far.
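A sketch of such a manual split (the special-cased arity and the test filters are illustrative): each hard-coded array access produces its own invokeinterface bytecode, and therefore its own type profile, so each site stays monomorphic as long as the same slot always holds the same class:

```java
interface Filter {
    boolean isOK(int i);
}

class Split {
    // Generic loop: a single call site whose type profile mixes all
    // receiver classes that ever appear in the array.
    static boolean matchesLoop(Filter[] filters, int i) {
        for (Filter f : filters) {
            if (!f.isOK(i)) {
                return false;
            }
        }
        return true;
    }

    // Manually split for the common arity: two distinct call sites,
    // each with its own type profile, each monomorphic if slot 0 and
    // slot 1 always hold the same classes.
    static boolean matchesSplit(Filter[] filters, int i) {
        if (filters.length == 2) {
            return filters[0].isOK(i) && filters[1].isOK(i);
        }
        return matchesLoop(filters, i); // generic fallback
    }
}
```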