Elasticsearch 7.2.0: master not discovered or elected yet, an election requires at least X nodes

The cluster.initial_master_nodes setting only has an effect the first time the cluster starts up. To avoid some very rare corner cases you should never change its value once you've set it, and you should generally remove it from the config file as soon as possible. From the reference manual regarding cluster.initial_master_nodes:

You should not use this setting when restarting a cluster or adding a new node to an existing cluster.
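
For reference, this is roughly what the setting looks like in elasticsearch.yml when bootstrapping a brand-new three-node cluster (the node names and hostnames here are assumptions, not taken from your config), and it should be removed again once the cluster has formed for the first time:

    # elasticsearch.yml on each of the three initial master-eligible nodes
    # (hypothetical names; remove cluster.initial_master_nodes after the first successful start)
    cluster.name: my-cluster
    node.name: master-a                      # master-b / master-c on the other two nodes
    discovery.seed_hosts:
      - master-a.example.internal
      - master-b.example.internal
      - master-c.example.internal
    cluster.initial_master_nodes:
      - master-a
      - master-b
      - master-c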

Aside from that, Elasticsearch uses a quorum-based election protocol and says the following:

To be sure that the cluster remains available you must not stop half or more of the nodes in the voting configuration at the same time.

You have stopped two of your three master-eligible nodes at the same time, which is more than half of them, so it's expected that the cluster no longer works.

The reference manual also contains instructions for removing master-eligible nodes which you have not followed:

As long as there are at least three master-eligible nodes in the cluster, as a general rule it is best to remove nodes one-at-a-time, allowing enough time for the cluster to automatically adjust the voting configuration and adapt the fault tolerance level to the new set of nodes.

If there are only two master-eligible nodes remaining then neither node can be safely removed since both are required to reliably make progress. To remove one of these nodes you must first inform Elasticsearch that it should not be part of the voting configuration, and that the voting power should instead be given to the other node.

It goes on to describe how to safely remove the unwanted nodes from the voting configuration using POST /_cluster/voting_config_exclusions/node_name when scaling down to a single node.
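
As a sketch of what that looks like on 7.2 (using the hypothetical node names from above, with the node name passed as a path parameter), you would exclude the two nodes you plan to retire, shut them down, and then clear the exclusions list:

    # add the nodes you are about to remove to the voting configuration exclusions
    curl -X POST "localhost:9200/_cluster/voting_config_exclusions/master-b"
    curl -X POST "localhost:9200/_cluster/voting_config_exclusions/master-c"

    # ...shut those nodes down, then clean up the exclusions list
    curl -X DELETE "localhost:9200/_cluster/voting_config_exclusions"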


The cluster state, which also stores the master configuration, is persisted in the data folder of each Elasticsearch node. In your case it seems the node is reading the old cluster state (which contains three master-eligible nodes, with their IDs).

You could delete the data folder of your master-a node so that it starts from a clean cluster state; that should resolve your issue (note that this also discards any index data stored on that node).
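
If you go that route, a rough sketch would look like the following, assuming a package install whose path.data is the default /var/lib/elasticsearch; check your own elasticsearch.yml first, and only do this on a node whose data you can afford to lose:

    # stop the node, wipe its on-disk state, then start it again
    # (path is an assumption -- verify path.data in elasticsearch.yml before running this)
    sudo systemctl stop elasticsearch
    sudo rm -rf /var/lib/elasticsearch/nodes
    sudo systemctl start elasticsearch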

Also make sure the other data and ingest nodes have node.master: false in their configuration, since it defaults to true.
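
For example, the relevant lines in elasticsearch.yml for a data/ingest-only node would look roughly like this (7.x role settings):

    # elasticsearch.yml on a data/ingest-only node
    node.master: false
    node.data: true
    node.ingest: true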