Kubernetes - Deploying Multiple Images into a single Pod

Yes, you just add more entries to the containers section of your YAML file, for example:

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  containers:
    - name: nginx-container
      image: nginx
    - name: debian-container
      image: debian
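A small variation of the pod above illustrates the shared network namespace: containers in one pod can reach each other on localhost. The second container is swapped to busybox here (an assumption, purely for illustration, since the stock debian image ships without curl or wget):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  containers:
    - name: nginx-container
      image: nginx            # listens on port 80 inside the pod
    - name: busybox-container
      image: busybox
      # fetch the nginx welcome page over the shared localhost interface,
      # after a short wait so nginx has time to start
      command: ["sh", "-c", "sleep 5 && wget -qO- http://localhost:80"]
```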

So I'm trying to deploy both the frontend image and the backend image into the same pod, so that they share the same cluster IP.

Although the accepted answer already shows an example of running multiple containers in the same pod, I'd like to point out a few details:

  • Containers should be in the same pod only if they need to scale together (not because you want them to communicate over a cluster IP). Your frontend/backend split doesn't really look like a good candidate for cramming them into one pod.

  • If you do opt to run the containers in the same pod, they can communicate over localhost: they see each other as two processes running on the same host (except that their file systems are separate), so they can talk to each other directly via localhost, but for the same reason they cannot both bind the same port. Communicating over the cluster IP instead would be like two processes on the same host talking to each other via an external IP.

  • A more Kubernetes-idiomatic approach here would be to:

    • Create a deployment for the backend
    • Create a service for the backend (exposing the necessary ports)
    • Create a deployment for the frontend
    • Communicate from the frontend to the backend using the backend service name (kube-dns resolves it to the cluster IP of the backend service) and the designated backend ports.
    • Optionally (for this example) create a service for the frontend for external access, or anything else that needs to go outside the cluster. Note that here you can use the same port as the backend service, since the two are not living in the same pod (host)...
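The steps above could be sketched roughly as follows. All names, ports, and image references here are illustrative assumptions, not part of the original answer:

```yaml
# Backend deployment (image name and port are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: my-backend:latest   # hypothetical image
          ports:
            - containerPort: 8080
---
# Backend service: reachable inside the cluster under the DNS name "backend"
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 8080
      targetPort: 8080
---
# Frontend deployment: reaches the backend at http://backend:8080 via kube-dns
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: my-frontend:latest  # hypothetical image
          env:
            - name: BACKEND_URL     # hypothetical way of passing the backend address
              value: http://backend:8080
```

With this layout, the frontend never needs to know the backend's cluster IP; the service name is enough.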

Some of the benefits of this approach: you can isolate the backend better (frontend-backend communication stays within the cluster and is not exposed to the outside world), you can schedule the pods independently on nodes, you can scale them independently (say you need more backend power while the frontend is handling traffic fine, or vice versa), you can replace either of them independently, etc...