Kubernetes pods failing on "Pod sandbox changed, it will be killed and re-created"

I can see the following message posted on the Google Cloud Status Dashboard:

"We are investigating an issue affecting Google Container Engine (GKE) clusters where after docker crashes or is restarted on a node, pods are unable to be scheduled.

The issue is believed to be affecting all GKE clusters running Kubernetes v1.6.11, v1.7.8 and v1.8.1.

Our Engineering Team suggests: If nodes are on release v1.6.11, please downgrade your nodes to v1.6.10. If nodes are on release v1.7.8, please downgrade your nodes to v1.7.6. If nodes are on v1.8.1, please downgrade your nodes to v1.7.6.

Alternative workarounds are also provided by the Engineering team in this doc. These workarounds are applicable to the customers that are unable to downgrade their nodes."
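
For reference, a node downgrade on GKE can be performed with `gcloud container clusters upgrade`, which despite its name accepts any supported target version. This is a sketch only: the cluster name, node pool name, and zone below are placeholders, not values from the status message.

```shell
# Downgrade the nodes in one pool to v1.7.6.
# my-cluster, default-pool, and the zone are placeholders --
# substitute your own cluster, pool, and zone.
gcloud container clusters upgrade my-cluster \
    --zone us-central1-a \
    --node-pool default-pool \
    --cluster-version 1.7.6
```

Nodes in the pool are recreated one at a time at the target version, so expect pods on each node to be rescheduled during the rollout.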


In my case, it happened because the memory and CPU limits on my pods were set too low.
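
If under-sized resources are the cause, raising the requests and limits on the affected workload may clear the error. A minimal sketch, assuming a hypothetical deployment named `my-app` and illustrative values:

```shell
# Raise CPU/memory requests and limits on a deployment.
# The deployment name and the values are examples only --
# tune them to what your workload actually needs.
kubectl set resources deployment my-app \
    --requests=cpu=250m,memory=256Mi \
    --limits=cpu=500m,memory=512Mi
```

This triggers a rolling update, so the pods are recreated with the new resource settings.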


I was affected by the same issue on one node in a GKE 1.8.1 cluster (the other nodes were fine). I did the following:

  1. Make sure your node pool has enough headroom to absorb all pods scheduled on the affected node. When in doubt, increase the node pool size by 1.
  2. Drain the affected node, following this manual:

    kubectl drain <node>
    

    Drain may refuse to proceed if the node runs DaemonSet-managed pods or pods using local storage; re-run it with `--ignore-daemonsets` (and `--delete-local-data` where applicable) to continue.

  3. Power down the affected node in Compute Engine. GKE should schedule a replacement node if your pool's current size is smaller than the size specified in the pool description.
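
The three steps above can be sketched as one shell sequence. All names here are placeholders (cluster `my-cluster`, pool `default-pool`, a zone, and a node name in GKE's usual format), and the target pool size is an example:

```shell
# 1. Add headroom: grow the pool by one node (example: 3 -> 4).
gcloud container clusters resize my-cluster \
    --zone us-central1-a --node-pool default-pool --num-nodes 4

# 2. Drain the affected node so its pods reschedule elsewhere.
#    The node name below is a placeholder in GKE's naming scheme.
kubectl drain gke-my-cluster-default-pool-12345678-abcd \
    --ignore-daemonsets --delete-local-data

# 3. Power down the node; the pool's managed instance group
#    should bring up a replacement.
gcloud compute instances stop gke-my-cluster-default-pool-12345678-abcd \
    --zone us-central1-a
```

Once the replacement node is healthy and pods have rescheduled, the pool can be resized back down to its original size.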