NLB Target Group health checks are out of control

Update: this has been answered on the related AWS forum post, which confirms that it's normal behaviour for Network Load Balancers and cites their distributed nature as the reason. There is no way to configure a custom interval. As of this writing, the docs are still out of date and state otherwise.


This is either a bug in NLB Target Groups or normal behaviour with incorrect documentation. I've come to this conclusion because:

  • I verified that the health checks are coming from the NLB
  • The configuration options are greyed out on the console
    • implying that AWS knows about, or imposed, this limitation
  • The same results are being observed by others
  • The documentation is specifically for Network Load Balancers
  • AWS docs commonly lead you on a wild goose chase

In this case I think it might be normal behaviour that's been documented incorrectly, but there's no way of verifying that unless someone from AWS can confirm it, and it's almost impossible to get an answer for an issue like this on the AWS forum.

It would be useful to be able to configure the setting, or at least have the docs updated.
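For reference, this is roughly what configuring those settings looks like through the API. This is a sketch using boto3; the target group ARN is a hypothetical placeholder, and my understanding is that the API historically accepted only fixed values (10 or 30 seconds) for the NLB health check interval, matching the greyed-out console fields.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical target group ARN; substitute your own.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123",
    HealthCheckIntervalSeconds=30,  # historically limited to 10 or 30 for NLB target groups
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=3,      # NLB required this to equal HealthyThresholdCount
)
```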


Edit: just wanted to share an update on this as of September 2021. If you are using an NLB you may get an email similar to this:

We are contacting you regarding an upcoming change to your Network Load Balancer(s). Starting on September 9, 2021, we will upgrade NLB's target health checking system. The upgraded system offers faster failure identification, improves target health state accuracy, and allows ELB to weight out of impacted Availability Zones during partial failure scenarios.

As a part of this update, you may notice that there is less health check traffic to backend targets, reducing the targets NetworkIn/Out metrics, as we have removed redundant health checks.

I expect this should resolve the issues with targets receiving many health checks when using an NLB.

Previous answer:

AWS employee here. To elaborate a bit on the accepted answer: the reason you may see bursts of health check requests is that NLB uses multiple distributed health checkers to evaluate target health. Each of these health checkers makes a request to the target at the interval you specify, so within each interval the target receives one request from each of the distributed probes. For example, with eight probes and a 30-second interval, the target sees a burst of up to eight requests every 30 seconds rather than one. Target health is then evaluated based on how many of the probes were successful.

You can read a very detailed explanation of this, written by another AWS employee, under "A look at Route 53 health checks": https://medium.com/@adhorn/patterns-for-resilient-architecture-part-3-16e8601c488e

My recommendation is to keep health checks very light. A lot of people make the mistake of overloading their health check to also do things like query the backend database or run other checks. Ideally, a health check for your load balancer does nothing but return a short string like "OK"; in that case it should take less than a millisecond for your code to serve the request. If you follow this pattern, occasional bursts of 6-8 health check requests should not overload your process.
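As a minimal sketch of that pattern (Python's standard library here for brevity; the port is an assumption, not anything NLB requires), the handler does no work beyond returning the string:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No database calls, no downstream checks: just prove the process is alive.
        body = b"OK"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep bursts of health checks out of the logs

if __name__ == "__main__":
    # Port 8080 is an assumption; point the target group's health check at it.
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```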


A bit late to the party on this, but something that works for me is to have my (C++) service spin up a thread dedicated to the health checks coming from ELB. The thread waits for a socket connection, then waits to read from the socket or to encounter an error; it then closes the socket and goes back to waiting for the next health check ping. This is WAY less expensive than having ELB hit my traffic port all the time. Not only does that make my code think it's being attacked, it also spins up all the logistics needed to service a real client.
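A minimal sketch of that pattern, shown in Python rather than C++ for brevity. The port number is an assumption; whatever you choose must be registered as the target group's health check port:

```python
import socket
import threading

HEALTH_PORT = 8081  # assumption: register this as the target group's health check port

def health_check_listener() -> None:
    # A TCP health check succeeds once the connection is accepted, so this
    # thread never touches the real traffic port or the request pipeline.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", HEALTH_PORT))
    srv.listen(16)
    while True:
        conn, _addr = srv.accept()
        try:
            conn.settimeout(2.0)
            conn.recv(64)      # wait for the checker to send data or close the connection
        except OSError:
            pass               # a timeout or reset is fine; the check already passed
        finally:
            conn.close()       # close and go back to waiting for the next ping

# Run alongside the real service without blocking it.
threading.Thread(target=health_check_listener, daemon=True).start()
```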