StatefulSet recreates pod, why?

You should look into two things:

  1. Debug Pods

Check the current state of the pod and recent events with the following command:

kubectl describe pods ${POD_NAME}

Look at the state of the containers in the pod. Are they all Running? Have there been recent restarts?
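The events section of the describe output is usually the most telling part. If the pod has already been replaced, you may still be able to pull the relevant events directly (a sketch, assuming the pod name is in ${POD_NAME}):

    # List events that reference this specific pod
    kubectl get events --field-selector involvedObject.name=${POD_NAME}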

Continue debugging depending on the state of the pods.

In particular, take a closer look at why the Pod crashed.
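If a container has restarted, the logs of the previous instance often reveal the crash reason; for example:

    # Show logs from the previous (crashed) container instance
    kubectl logs ${POD_NAME} --previous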

More info can be found in the links I have provided.

  2. Debug StatefulSets

StatefulSets provide a debug mechanism to pause all controller operations on Pods using an annotation. Setting the pod.alpha.kubernetes.io/initialized annotation to "false" on any StatefulSet Pod pauses all operations of the StatefulSet: while paused, it performs no scaling operations, so you can execute commands within the containers of its Pods without interference. You can set the annotation to "false" by executing the following:

kubectl annotate pods <pod-name> pod.alpha.kubernetes.io/initialized="false" --overwrite

When the annotation is set to "false", the StatefulSet will not respond to its Pods becoming unhealthy or unavailable, and it will not create replacement Pods until the annotation is removed or set to "true" on each StatefulSet Pod.
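Once you have finished debugging, you can resume normal controller operations the same way, by flipping the annotation back:

    # Resume StatefulSet operations on this Pod
    kubectl annotate pods <pod-name> pod.alpha.kubernetes.io/initialized="true" --overwrite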

Please let me know if that helped.


Another nifty little trick I came up with is to describe the pod as soon as it stops logging, by using

kubectl logs -f mypod && kubectl describe pod mypod

When the pod fails and stops logging, the kubectl logs -f mypod command terminates, and the shell immediately executes kubectl describe pod mypod, (hopefully) letting you catch the state of the failing pod before it is recreated.
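One caveat: && only runs the second command if the first exits with status 0, which kubectl logs -f may not do when the log stream breaks. A slightly more robust variation of the same trick uses ; and saves the output to a file (the file name here is arbitrary):

    # Run describe regardless of how logs -f exited, and keep the snapshot
    kubectl logs -f mypod; kubectl describe pod mypod > mypod-state.txt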

In my case it was showing

    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137

in line with what Timothy is saying.
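Exit code 137 with reason OOMKilled means the container exceeded its memory limit, so the kubelet killed it and the StatefulSet controller recreated the Pod. A minimal sketch of raising the limit in place, assuming a StatefulSet named web, a single container, and a new limit of 512Mi (all hypothetical values):

    # Raise the memory limit of the first container in the pod template
    kubectl patch statefulset web --type='json' \
      -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "512Mi"}]'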