Kubernetes Node.js container cannot connect to MongoDB Atlas

Firstly, Pods are deployed on the nodes of a cluster, not behind your Service, so Mongo won't recognise your Service endpoints (e.g. the load balancer IP). Based on this, there are two solutions:

solution A

  1. Add the endpoint of the Kubernetes cluster to the MongoDB network access IP whitelist.

  2. Under the pod spec of your k8s Pod (or Deployment) manifest, add a dnsPolicy field with the value set to Default; see the sketch after this list. Your Pods (your containers, basically) will then connect to Mongo using the name-resolution configuration of the node they run on.
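A minimal sketch of a Pod manifest with this policy; the name, image, and connection string are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-node-app                # placeholder name
spec:
  dnsPolicy: Default               # inherit DNS config from the node the Pod runs on
  containers:
    - name: app
      image: my-node-app:latest    # placeholder image
      env:
        - name: MONGODB_URI        # hypothetical env var for the Atlas connection string
          value: "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/<db>"
```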

solution B

  1. Add the IP addresses of all nodes in your k8s cluster to the MongoDB network access IP whitelist.

  2. Under the pod spec of your k8s Pod (or Deployment) manifest, set hostNetwork: true and add a dnsPolicy field with the value set to ClusterFirstWithHostNet; see the sketch after this list. Since the Pods then run on the host network, their traffic leaves from the node IPs you whitelisted, and they also gain access to services listening on the node's localhost.
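A sketch of the same idea for solution B, assuming the Pod runs on the host network; names, image, and port are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-node-app                     # placeholder name
spec:
  hostNetwork: true                     # Pod shares the node's network namespace
  dnsPolicy: ClusterFirstWithHostNet    # keeps cluster DNS working for host-network Pods
  containers:
    - name: app
      image: my-node-app:latest         # placeholder image
      ports:
        - containerPort: 3000           # hypothetical application port
```

With hostNetwork: true, outbound connections to Atlas come from the node's own IP, which is why every node IP needs to be whitelisted in step 1.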

See the Kubernetes documentation on Pod DNS policies for details.


I figured out the issue: my Pod DNS was not configured to allow external connections, so I set dnsPolicy: Default in my YML (as in the snippet below), because oddly enough Default is not actually the default value; the default is ClusterFirst.
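For reference, in a Deployment the field belongs under the Pod template's spec; a minimal sketch with placeholder names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      dnsPolicy: Default             # omit this and Kubernetes falls back to ClusterFirst
      containers:
        - name: app
          image: my-node-app:latest  # placeholder image
```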


I use MongoDB Atlas from Kubernetes, but on AWS. For testing purposes you can simply allow access from all IP addresses (0.0.0.0/0), but here is the approach for a production setup:

  • MongoDB Atlas supports Network Peering.
  • Set it up under Network Access > New Peering Connection.
  • For AWS, the VPC ID, CIDR block and region have to be specified. For GCP, follow the standard VPC peering procedure.