How do I prevent robots from indexing pages of my app through alternate URLs?

It happens that Googlebot discovers alternate URLs. If you want your application to remain accessible through all those URLs for internal purposes, the official way is to use canonical links, as you suggest.
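As a minimal sketch (with `https://www.example.com` standing in as a placeholder for your official hostname), the canonical link is a single element in the `<head>` of each page:

```html
<!-- Tell crawlers which URL is the official one for this page. -->
<!-- https://www.example.com is a placeholder for your canonical hostname. -->
<link rel="canonical" href="https://www.example.com/some-page">
```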

The other way is to simply issue a 301 (Moved Permanently) redirect so that all traffic uses the official URLs. This also allows you to enforce HTTPS, since it likewise happens that Googlebot indexes both the HTTP and HTTPS versions when both exist and there is no redirect between them.
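For illustration, here is a minimal nginx sketch (nginx and the `example.com` hostnames are assumptions, not details from your setup) that funnels plain-HTTP and alternate-hostname traffic to the canonical HTTPS origin:

```nginx
# Catch-all: any request over plain HTTP, on any hostname,
# gets a permanent redirect to the canonical HTTPS origin.
server {
    listen 80 default_server;
    server_name _;
    return 301 https://www.example.com$request_uri;
}

# The canonical site itself, served only over HTTPS.
server {
    listen 443 ssl;
    server_name www.example.com;
    # ssl_certificate, ssl_certificate_key and the rest of the site config go here
}
```

Using `$request_uri` preserves the path and query string across the redirect, so deep links keep working.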

Given that you control the site, you are even free not to issue the redirect for internal traffic. This is common when the internal server does not have an SSL certificate. You can skip the redirect when the client IP address is internal to your network, for example.
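Continuing the nginx sketch (the `10.0.0.0/8` range below is a placeholder for whatever your internal network actually uses), the `geo` directive makes this kind of exception straightforward:

```nginx
# 1 = request comes from the internal network, 0 = everyone else.
# 10.0.0.0/8 is a placeholder for your actual internal range.
geo $internal {
    default     0;
    10.0.0.0/8  1;
}

server {
    listen 80;
    server_name www.example.com;

    # External clients are forced onto HTTPS; internal clients are not,
    # which is handy when the internal server has no SSL certificate.
    if ($internal = 0) {
        return 301 https://www.example.com$request_uri;
    }

    # ... serve the site over plain HTTP for internal traffic ...
}
```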

robots.txt is generally not a good option because bots can choose to ignore it. Googlebot will respect it, but you may have to account for other bots.
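For completeness, if you did go that route you would serve a different robots.txt per hostname, and the one on the alternate hostnames would look something like this (a sketch only, with the caveat above):

```
# robots.txt served on the alternate/internal hostnames only
User-agent: *
Disallow: /
```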


> those pages/URLs should NOT be indexed

Well, they should be indexed, but not at the other hostnames.

You should certainly implement OPTION#1 and configure a rel="canonical" element. This should also protect you against non-canonical query strings (intentional or malicious).

AND, ideally, you would 301 redirect any requests for non-canonical hostnames to the canonical one.

Blocking requests with robots.txt does not necessarily prevent indexing if the URLs are linked to from elsewhere. And if a non-canonical URL does get linked to, you won't get any SEO benefit from it while crawling is blocked. You certainly should not block with robots.txt if you implement external redirects, as crawlers will never see the redirects and so cannot follow them.