I can't deploy a new container to my ECS cluster because of ports in use

While the other answers are correct, I don't think they apply to the problem you have. I say this because my team has faced the same problem, and it doesn't really have anything to do with trying to launch multiple containers on the same instance: if I understand correctly, you're just trying to replace the existing container from an updated task definition. If you want to run multiple copies of the same container on a single box, definitely look at the suggestions from the other answers (in addition to the details below), but for rolling deploys, dynamic ports are by no means required.

[[ Side note for completeness: it's possible that your forced deploy threw the error you posted because it just takes a while for EC2 to clean up resources stopped by ECS. You'll see the same sort of issue if you're trying to force stop / start a task -- we've seen similar errors when trying to restart a container that was configured to allocate >50% of the available instance memory. You'll get those types of resource errors until the EC2 instance is completely cleaned up, and reported back to ECS. I've seen this take upwards of 5 minutes. ]]

To your question then: unfortunately, for now there aren't any great built-in mechanisms from AWS for performing a rolling restart of tasks. However, you can do rolling deploys.

As you're probably aware already, your Service points at a specific task definition. Note that it tracks the task definition revision number; it doesn't care about the container image tags the way the EC2 instance does when it pulls the image.

The settings below are where the magic happens for enabling rolling deploys; you'll find these configuration options in your service settings.

[screenshot: the service's deployment configuration settings]

To be able to do rolling deploys, you need at least 2 tasks running.

  • Number of tasks -- The number of tasks your service wants to keep running (n)
  • Minimum healthy percent -- The minimum % of n that must remain healthy while deploying new tasks
  • Maximum percent -- The maximum % of n that can be running (old plus new) while deploying new tasks

So for a real example, let's assume you have the following configuration:

Number of tasks: 3
Minimum healthy percent:  50
Maximum percent: 100
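
If you'd rather set this from code than from the console, here's a minimal boto3 sketch applying that same configuration. The cluster and service names ("my-cluster", "my-service") are placeholders; adjust them to your setup.

    import boto3

    # Assumes AWS credentials and region are already configured.
    ecs = boto3.client("ecs")

    # Apply the example settings: 3 tasks, allow dropping to 50% healthy,
    # never exceed 100% of the desired count during a deploy.
    ecs.update_service(
        cluster="my-cluster",
        service="my-service",
        desiredCount=3,
        deploymentConfiguration={
            "minimumHealthyPercent": 50,
            "maximumPercent": 100,
        },
    )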

If you change the task definition your service is pointing at, it will initiate a rolling deploy. We have 3 running tasks, but allow dropping to >=50% healthy. ECS will kill one of your tasks, dropping the healthy % to 66%, still above 50%. Once the new task comes up, the service is back at 100%, and ECS can continue rolling the deploy out to the next task.

Likewise, if you had a configuration where minimum % == 100, and maximum % == 150 (assuming you have capacity), ECS will launch an additional task; once it's up, you have a healthy percent of 133%, and it can safely kill one of the old tasks. This process continues until your new task is fully deployed.
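For completeness, here's a hedged sketch of kicking off the rolling deploy itself: pointing the service at a new task definition revision is what triggers the behavior described above. The revision "my-task:43" is made up; in practice use the family:revision or ARN returned by register_task_definition.

    import boto3

    ecs = boto3.client("ecs")

    # Point the service at a new task definition revision; changing the
    # task definition is what starts the rolling deploy.
    # "my-task:43" is a hypothetical revision.
    ecs.update_service(
        cluster="my-cluster",
        service="my-service",
        taskDefinition="my-task:43",
    )

    # If nothing changed but you still want ECS to cycle the tasks,
    # you can force a new deployment of the same revision instead:
    ecs.update_service(
        cluster="my-cluster",
        service="my-service",
        forceNewDeployment=True,
    )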


When using ECS (or any other orchestrator), you're encouraged to use Dynamic Port Mapping.

Basically, ECS will assign a random unassigned host port to your container. ECS then offers ways to retrieve that port number, using the agent introspection API or the Docker client itself. However, I wouldn't try to retrieve the port; I would instead rely on an Application Load Balancer (ALB), which lets you use a single endpoint to reach any targeted container regardless of its dynamically assigned port. When you update your service, the ALB will seamlessly transition to the newest version of the container without any disruption.

Finally, inside the container the local port will remain the same so you don't have to handle things differently.
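
To illustrate, here's a rough boto3 sketch of what dynamic port mapping looks like when registering a task definition: setting hostPort to 0 in bridge networking mode lets ECS/Docker pick a free host port, while containerPort stays fixed inside the container. The family, container name, and image are placeholders.

    import boto3

    ecs = boto3.client("ecs")

    # hostPort=0 asks ECS/Docker to pick any free port on the instance
    # (bridge networking mode); containerPort stays 80 inside the container,
    # so the app itself doesn't change. "web" and the image are placeholders.
    ecs.register_task_definition(
        family="my-web-task",
        networkMode="bridge",
        containerDefinitions=[
            {
                "name": "web",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web:latest",
                "memory": 512,
                "portMappings": [
                    {"containerPort": 80, "hostPort": 0, "protocol": "tcp"}
                ],
            }
        ],
    )

    # When the service is created with an ALB target group, ECS registers
    # each task's dynamically assigned host port with the target group:
    # ecs.create_service(
    #     cluster="my-cluster",
    #     serviceName="my-service",
    #     taskDefinition="my-web-task",
    #     desiredCount=3,
    #     loadBalancers=[{
    #         "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/...",
    #         "containerName": "web",
    #         "containerPort": 80,
    #     }],
    # )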


Without dynamic ports, only one instance of a service can run per container instance, because the host port used by one task cannot be reused by another. When you update the service, ECS will try to restart all of its tasks, and if more than one is started on a single EC2 container instance, the startup will fail.

It's better to use Docker containers with dynamic port mapping in an ECS cluster.