Kubernetes pod/deployment while passing args to container?

In terms of specific fields:

  • Kubernetes's command: matches Docker's "entrypoint" concept, and whatever is specified here is run as the main process of the container. You don't need to specify a command: in a pod spec if your Dockerfile has a correct ENTRYPOINT already.
  • Kubernetes's args: matches Docker's "command" concept, and whatever is specified here is passed as command-line arguments to the entrypoint.
  • Environment variables in both Docker and Kubernetes have their usual Unix semantics.
  • Dockerfile ARG specifies a build-time configuration setting for an image. The expansion rules and interaction with environment variables are a little odd. In my experience this has a couple of useful use cases ("which JVM version do I actually want to build against?"), but every container built from an image will have the same inherited ARG value; it's not a good mechanism for run-time configuration.
  • For various things that could be set in either the Dockerfile or at runtime (ENV variables, EXPOSEd ports, a default CMD, especially VOLUME) there's no particular need to "declare" them in the Dockerfile to be able to set them at run time.
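To make the field mapping above concrete, here is a minimal pod spec sketch (the image name is taken from your question; everything else is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: docker.local:5000/test
      # command: would override the image's ENTRYPOINT; since the
      # Dockerfile already has a correct ENTRYPOINT, we leave it unset
      args: ["/bin/true"]        # passed as arguments to the entrypoint
      env:
        - name: ROLE             # an ordinary runtime environment variable
          value: some_role
```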

There are a couple of more-or-less equivalent ways to do what you're describing. (I will use docker run syntax for the sake of compactness.) Probably the most flexible way is to have ROLE set as an environment variable; when you run the entrypoint script you can assume $ROLE has a value, but it's worth checking.

#!/bin/sh
# --> I expect $ROLE to be set
# --> Pass some command to run as additional arguments
if [ -z "$ROLE" ]; then
  echo "Please set a ROLE environment variable" >&2
  exit 1
fi
echo "You are running $ROLE version of your app"
exec "$@"
Run it with the role supplied as an environment variable:

docker run --rm -e ROLE=some_role docker.local:5000/test /bin/true

In this case you can specify a default ROLE in the Dockerfile if you want to.

FROM centos:7.4.1708
COPY ./role.sh /usr/local/bin
RUN chmod a+x /usr/local/bin/role.sh
ENV ROLE="default_role"
ENTRYPOINT ["role.sh"]
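If you run this through a Deployment rather than a bare pod, the same ENV default baked into the image can be overridden in the pod template. A sketch, with hypothetical names and labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: docker.local:5000/test
          env:
            - name: ROLE      # overrides the ENV ROLE="default_role" in the image
              value: some_role
```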

A second path is to take the role as a command-line parameter:

#!/bin/sh
# --> pass a role name, then a command, as parameters
ROLE="$1"
if [ -z "$ROLE" ]; then
  echo "Please pass a role as a command-line option" >&2
  exit 1
fi
echo "You are running $ROLE version of your app"
shift        # drops first parameter
export ROLE  # makes it an environment variable
exec "$@"
Invoke it with the role as the first command-line argument:

docker run --rm docker.local:5000/test some_role /bin/true
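In Kubernetes the same invocation becomes an args: list in the pod spec, with the role as the first element (a sketch; the pod name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: docker.local:5000/test
      # args: maps to the docker run positional arguments:
      # first the role name, then the command for the entrypoint to exec
      args: ["some_role", "/bin/true"]
```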

I would probably prefer the environment-variable path, both because it makes it a little easier to supply several unrelated settings and because it avoids mixing "settings" and "the command" together in the command part of the Docker invocation.

As to why your pod is "crashing": Kubernetes generally expects pods to be long-running, so if you write a container that just prints something and exits, Kubernetes will restart it, and when it repeatedly fails to stay up, it will wind up in CrashLoopBackOff state. For what you're trying to do right now, don't worry about it: the output is still available via kubectl logs. If the restarts bother you, set the pod spec's restartPolicy (note that a bare pod can use Never or OnFailure, but a Deployment's pods must use Always).
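For a run-once container, a bare pod can declare that it should not be restarted. A sketch (this works for bare pods and Jobs, but not Deployments):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-once
spec:
  restartPolicy: Never       # don't restart when the entrypoint exits
  containers:
    - name: test
      image: docker.local:5000/test
      args: ["/bin/true"]
```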