How to upgrade a running older Elasticsearch instance to a newer version?

There is a lot more information about upgrading Elasticsearch these days than there used to be.

Here are my usual steps when upgrading ElasticSearch:

  1. Back up the data: Snapshot and Restore (see the example after this list)

  2. Follow the upgrade guide: Upgrading Elasticsearch
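
For the backup itself, here is a minimal sketch using the snapshot API; the repository name my_backup and the path /mnt/backups/my_backup are placeholders, and on some versions the path also has to be whitelisted via path.repo in elasticsearch.yml:

    # Register a shared filesystem repository (name and path are examples)
    curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{ "type": "fs", "settings": { "location": "/mnt/backups/my_backup" } }'

    # Snapshot all indices and wait for the snapshot to finish
    curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'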

The main idea is that you shut down one instance of the ES cluster at a time, upgrade the ES version on that node, and bring it up again so it can rejoin the cluster.

In brief, here are the important steps:

  1. Disable shard reallocation:

    curl -XPUT localhost:9200/_cluster/settings -d '{ "transient" : { "cluster.routing.allocation.enable" : "none" } }'

  2. Shut down the instance:

    curl -XPOST 'http://localhost:9200/_cluster/nodes/_local/_shutdown'

  3. Install the new Elasticsearch version on the host and start it.
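
As a rough sketch, on a Debian-based host using the .deb package this step could look like the following; the package file name, version and service manager are assumptions, so adjust them for your distribution and target version:

    # Example only: install the downloaded package and start the service
    sudo dpkg -i elasticsearch-1.7.5.deb
    sudo service elasticsearch start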

  4. Enable shard reallocation:

    curl -XPUT localhost:9200/_cluster/settings -d '{ "transient" : { "cluster.routing.allocation.enable" : "all" } }'

  5. Watch the cluster go from yellow to green with:

    curl -X GET 'http://localhost:9200/_cat/health?v'   # monitor the overall cluster state

    curl -X GET 'http://localhost:9200/_cat/nodes?v'    # verify that the new node joined the cluster

    curl -X GET 'http://localhost:9200/_cat/shards?v'   # see shards being started, initialized and relocated
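
Instead of polling, you can also block until the cluster reaches the desired state with the cluster health API's wait_for_status parameter, for example:

    # Returns once the cluster is green, or when the timeout expires
    curl -X GET 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=60s'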

  6. Repeat for the next node.

In terms of ordering, upgrade the master nodes first, then the data nodes, then the load-balancing/client nodes.
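
If you are unsure which node is which, the cat nodes API can show each node's role and which node is the current master; a small sketch (column names assume a reasonably recent release):

    # Lists each node with its role, its master status and its version
    curl -X GET 'http://localhost:9200/_cat/nodes?v&h=name,master,node.role,version'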


  1. All node data is stored in the Elasticsearch data directory, which defaults to data/cluster_name/nodes under the Elasticsearch home. So, in general, as long as the data directory is preserved and the config files of the new version are compatible with the old version, the new instance will see the same data as the old one. Note that some releases have additional requirements outlined in their release notes; for example, upgrading from 0.18 to 0.19 requires issuing a full flush of all the indices in the cluster.
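
For that particular 0.18 to 0.19 case, the full flush is a single call against the flush API, roughly:

    # Flush all indices so pending operations are committed before the upgrade
    curl -XPOST 'http://localhost:9200/_flush'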

  2. There is really no good way to accomplish this. Nodes communicate using a binary protocol that is not backward compatible, so if the protocol changes in the new version, old nodes and new nodes cannot understand each other. Sometimes it is possible to mix nodes with different minor versions within the same cluster and do a rolling upgrade. However, as far as I understand, there is no explicit compatibility guarantee between nodes even within minor releases, and major releases always require a full cluster restart. If downtime during a full cluster restart is not an option, a nice technique by DrTech might be a solution.
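
If you do attempt a rolling upgrade across minor versions, it is worth keeping an eye on which versions are actually live in the cluster, for example with:

    # Show the Elasticsearch version each node is running
    curl -X GET 'http://localhost:9200/_cat/nodes?v&h=name,version'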