Performance of synchronized sections in Java

There is some overhead in acquiring an uncontended lock, but on modern JVMs it is very small.

A key runtime optimization relevant to this case is called "Biased Locking", and it is explained in the Java SE 6 Performance White Paper.

If you want performance numbers that are relevant to your JVM and hardware, you can construct a micro-benchmark to measure this overhead.
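Here is a minimal sketch using the JMH harness (assuming JMH is on the classpath; the class and method names are illustrative). It compares an increment with and without an uncontended synchronized block:

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class UncontendedLockBenchmark {

    private final Object lock = new Object();
    private long counter;

    // Baseline: increment with no locking at all.
    @Benchmark
    public long plain() {
        return counter++;
    }

    // The same work inside an uncontended synchronized block.
    @Benchmark
    public long locked() {
        synchronized (lock) {
            return counter++;
        }
    }
}

Run it through JMH rather than a hand-rolled timing loop: dead-code elimination and warm-up effects easily distort naive measurements.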


Single-threaded code will still run slower when using synchronized blocks. Obviously, no other threads will be stalled waiting for the lock; however, you will have to deal with the other effects of synchronization, namely cache coherency.

Synchronized blocks are used not only for mutual exclusion, but also for visibility. Every synchronized block acts as a memory barrier: the JVM is otherwise free to work on variables in registers, instead of main memory, on the assumption that multiple threads will not access that variable. Without synchronized blocks, this data could be stored in a CPU's cache, and different threads on different CPUs would not see the same data. By using a synchronized block, you force the JVM to write this data to main memory so it is visible to other threads.
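For example, here is a minimal sketch (class and field names are illustrative) of a flag whose update is made visible to a reader thread through a synchronized block:

public class StopFlag {
    private boolean stopRequested;

    // Writer thread: the updated value is flushed when the lock is released.
    public void requestStop() {
        synchronized (this) {
            stopRequested = true;
        }
    }

    // Reader thread: acquiring the same lock guarantees the latest value is
    // read; without synchronization the JVM could keep the field in a
    // register and the reader might never see the update.
    public boolean isStopRequested() {
        synchronized (this) {
            return stopRequested;
        }
    }
}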

So even though you're free from lock contention, the JVM will still have to do housekeeping in flushing data to main memory.

In addition, synchronization constrains optimization. The JVM is normally free to reorder instructions in order to optimize execution; consider a simple example:

foo++;
bar++;

versus:

foo++;
synchronized(obj)
{
    bar++;
}

In the first example, the compiler is free to load foo and bar at the same time, then increment them both, then save them both. In the second example, the compiler must perform the load/add/save on foo, then perform the load/add/save on bar. Thus, synchronization may limit the JVM's ability to optimize instructions.

(An excellent book on the Java Memory Model is Brian Goetz's Java Concurrency In Practice.)


There are three types of locking in HotSpot:

  1. Fat: the JVM relies on OS mutexes to acquire the lock.
  2. Thin: the JVM uses a CAS (compare-and-swap) algorithm.
  3. Biased: CAS is a rather expensive operation on some architectures. Biased locking is a special type of locking optimized for the scenario where only one thread is working on an object.

By default, the JVM uses thin locking. Later, if the JVM determines that there is no contention, thin locking is converted to biased locking. The operation that changes the type of the lock is rather expensive, so the JVM does not apply this optimization immediately. There is a special JVM option, -XX:BiasedLockingStartupDelay=delay, which tells the JVM when this kind of optimization should be applied.

Once an object's lock is biased toward a thread, that thread can subsequently lock and unlock the object without resorting to expensive atomic instructions.
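As an illustration, here is a single-threaded sketch (class name illustrative) of the pattern biased locking is designed for. Running with -XX:BiasedLockingStartupDelay=0 makes HotSpot apply the optimization immediately, on versions that still support it (biased locking was disabled by default and deprecated in JDK 15):

public class BiasedLockingDemo {
    private int counter;

    // The monitor on 'this' is only ever acquired by one thread,
    // which is exactly the scenario biased locking optimizes.
    public synchronized void increment() {
        counter++;
    }

    public static void main(String[] args) {
        // Try running with: java -XX:BiasedLockingStartupDelay=0 BiasedLockingDemo
        BiasedLockingDemo demo = new BiasedLockingDemo();
        for (int i = 0; i < 100_000_000; i++) {
            demo.increment(); // once biased: no atomic instruction per lock/unlock
        }
        System.out.println(demo.counter);
    }
}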

To answer the question: it depends. But if the lock is biased, single-threaded code with locking and without locking has, on average, the same performance.

  • Biased Locking in HotSpot - Dave Dice's Weblog
  • Synchronization and Object Locking - Thomas Kotzmann and Christian Wimmer