How to delete only unmounted PVCs and PVs?

Not very elegant, but here's a bash way to delete Released PVs:

kubectl get pv | grep Released | awk '{print $1}' | while read vol; do kubectl delete pv "$vol"; done
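A slightly safer variant matches the STATUS column explicitly instead of grepping the whole line, so a PV whose name merely contains "Released" isn't deleted by accident (this assumes STATUS is the fifth column, which is the default layout of kubectl get pv — verify on your version):

kubectl get pv --no-headers | awk '$5 == "Released" {print $1}' | while read vol; do kubectl delete pv "$vol"; done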

Looking through the current answers, it looks like most of them don't directly answer the question (I could be mistaken). A PVC that is Bound is not the same as one that is mounted. The current answers should suffice to clean up unbound PVCs, but finding and cleaning up all unmounted PVCs seems unanswered.
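You can see the distinction with plain kubectl get: a claim is reported as Bound as long as it is bound to a PV, whether or not any pod is actually mounting it, so the STATUS column alone can't tell you which PVCs are safe to remove:

kubectl get pvc -A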

Unfortunately, it looks like -o=go-template=... doesn't have a variable for the Mounted By: field shown in kubectl describe pvc; that list isn't stored on the PVC object itself, it's computed by describe from the pods that reference the claim.
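That does mean you can reconstruct the same information from the pod side. Here's a rough go-template sketch (the field names are standard pod spec fields; the if-guards skip pods and volumes that don't reference a claim) that prints every namespace/claim pair currently referenced by some pod:

kubectl get pods -A -o go-template='{{range .items}}{{$ns := .metadata.namespace}}{{if .spec.volumes}}{{range .spec.volumes}}{{if .persistentVolumeClaim}}{{$ns}}/{{.persistentVolumeClaim.claimName}}{{"\n"}}{{end}}{{end}}{{end}}{{end}}' | sort -u

Any PVC not in that list is unmounted. That said, the describe-based approach below needs less template wrangling.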

Here's what I've come up with after some hacking around:

To list all PVCs in a cluster (mounted and not mounted):

kubectl describe -A pvc | grep -E "^Name:.*$|^Namespace:.*$|^Mounted By:.*$"

The -A flag returns every PVC in the cluster, in every namespace. We then filter the output down to just the Name, Namespace and Mounted By fields.
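For an unmounted claim, the filtered output looks roughly like this (the names here are made up for illustration):

Name:          my-claim
Namespace:     default
Mounted By:    <none>

That <none> on the Mounted By line is what the next step keys on.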

The best I could come up with to then get the names and namespaces of all unmounted PVCs is this:

kubectl describe -A pvc | grep -E "^Name:.*$|^Namespace:.*$|^Mounted By:.*$" | grep -B 2 "<none>" | grep -E "^Name:.*$|^Namespace:.*$"

Actually deleting the PVCs is somewhat tricky because we need to know each PVC's name as well as its namespace. We use cut, paste and xargs to do this:

kubectl describe -A pvc | grep -E "^Name:.*$|^Namespace:.*$|^Mounted By:.*$" | grep -B 2 "<none>" | grep -E "^Name:.*$|^Namespace:.*$" | cut -f2 -d: | paste -d " " - - | xargs -n2 bash -c 'kubectl -n ${1} delete pvc ${0}'
  • cut removes Name: and Namespace: since they just get in the way
  • paste puts the name of the PVC and its namespace on the same line
  • xargs -n2 bash -c makes it so the PVC name is ${0} and the namespace is ${1} (see the worked example below).
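To make that concrete, here is roughly what happens for a hypothetical claim my-claim in namespace default: the greps leave the lines Name: my-claim and Namespace: default, cut -f2 -d: strips the labels to leave my-claim and default on separate lines, paste -d " " - - joins each pair onto one line, and xargs -n2 then consumes two words at a time, effectively running:

bash -c 'kubectl -n ${1} delete pvc ${0}' my-claim default

With bash -c, the first argument after the script becomes ${0} and the second becomes ${1}, which is why the name and namespace land where they do.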

I admit I have a feeling this isn't the best way to do it, but it was the only obvious way I could come up with (on the CLI).

After running this, the corresponding PVs will go from Bound to Released (assuming a Retain reclaim policy), and the other answers in this thread have good ideas on how to clean those up.

Also, keep in mind that some volume controllers don't actually delete your data when the volumes are deleted in Kubernetes. You might still need to clean that up in whichever storage system you are using.
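Whether the data survives usually depends on the PV's reclaim policy (and the provisioner's settings). You can review the policies with something like:

kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,STATUS:.status.phase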

For example, with the NFS provisioner the data gets renamed with an archived- prefix, and on the NFS side you can run rm -rf /persistentvolumes/archived-*. For AWS EBS, you might still need to delete the EBS volumes themselves if they are left detached from any instance.
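For the EBS case, here's a minimal AWS CLI sketch for reviewing unattached volumes before deleting anything. Note that this lists every available volume in the region, not just ones that came from Kubernetes, so double-check each one before running delete-volume on it:

aws ec2 describe-volumes --filters Name=status,Values=available --query 'Volumes[].VolumeId' --output text

aws ec2 delete-volume --volume-id <volume-id>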

I hope this helps!