Zip an entire directory on S3

No, there is no magic bullet.

(As an aside, you have to realize that there is no such thing as a "directory" in S3. There are only objects with paths. You can get directory-like listings, but the '/' character isn't magic - you can delimit prefixes with any character you want.)

As someone pointed out, "pre-zipping" them can help both download speed and append speed. (At the expense of duplicate storage.)

If downloading is the bottleneck, it sounds like you are downloading serially. S3 can support thousands of simultaneous connections to the same object without breaking a sweat. You'll need to run benchmarks to see how many connections are best, since too many connections from one box might get throttled by S3. And you may need to do some TCP tuning when making thousands of connections per second.
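As a sketch of what parallel downloading looks like, the snippet below splits one object into byte ranges and fetches them concurrently. It assumes a boto3-style client (`head_object`/`get_object` with a `Range` header); the bucket/key names, `part_size`, and `max_workers` are placeholders you'd tune by benchmarking.

```python
# Sketch: download one S3 object over many parallel ranged GETs.
# Assumes a boto3-style client; part_size and max_workers are tuning knobs.
from concurrent.futures import ThreadPoolExecutor


def plan_ranges(size, part_size):
    """Split [0, size) into inclusive (start, end) byte ranges."""
    return [(start, min(start + part_size, size) - 1)
            for start in range(0, size, part_size)]


def parallel_download(s3, bucket, key, part_size=8 * 1024 * 1024,
                      max_workers=16):
    size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]

    def fetch(rng):
        start, end = rng
        resp = s3.get_object(Bucket=bucket, Key=key,
                             Range=f"bytes={start}-{end}")
        return resp["Body"].read()

    # pool.map preserves part order, so the join reassembles the object.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return b"".join(pool.map(fetch, plan_ranges(size, part_size)))
```

The same idea extends to many small objects: submit one `get_object` per key to the pool instead of one per range.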

The "solution" depends heavily on your data access patterns. Try rearranging the problem. If single-file downloads are infrequent, it might make more sense to store them in S3 in groups of 100, then break the groups apart when requested. If they are small files, it might make sense to cache them on the filesystem.
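The grouping idea can be sketched as follows: bundle a batch of small files into one zip in memory, then upload it as a single object. This is a minimal sketch assuming a boto3-style client; `upload_batch` and its arguments are hypothetical names.

```python
# Sketch: bundle many small files into one zip object per upload.
import io
import zipfile


def bundle(files):
    """files: iterable of (name, bytes) pairs. Returns zip archive bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files:
            zf.writestr(name, data)
    return buf.getvalue()


def upload_batch(s3, bucket, batch_key, files):
    # One PUT per batch of ~100 files instead of 100 PUTs.
    s3.put_object(Bucket=bucket, Key=batch_key, Body=bundle(files))
```

Serving a single file then means fetching its batch object and extracting one member, which is cheap when the batch stays small.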

Or it might make sense to store all 5000 files as one big zip file in S3, and use a "smart client" that can download specific ranges of the zip file in order to serve the individual files. (S3 supports byte ranges, as I recall.)
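One way to build such a client is to wrap ranged S3 GETs in a seekable file-like object, so Python's `zipfile` can read a single member without downloading the whole archive (it only fetches the central directory and that member's bytes). This is a sketch assuming a boto3-style client; error handling and range caching are omitted.

```python
# Sketch of a "smart client": serve one file out of a big zip on S3
# using ranged GETs, without downloading the whole archive.
import io
import zipfile


class RangedS3File(io.RawIOBase):
    """Minimal seekable, read-only view of an S3 object via Range GETs."""

    def __init__(self, s3, bucket, key):
        self.s3, self.bucket, self.key = s3, bucket, key
        self.size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]
        self.pos = 0

    def seekable(self):
        return True

    def readable(self):
        return True

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            self.pos = offset
        elif whence == io.SEEK_CUR:
            self.pos += offset
        else:  # io.SEEK_END
            self.pos = self.size + offset
        return self.pos

    def tell(self):
        return self.pos

    def read(self, n=-1):
        if n < 0:
            n = self.size - self.pos
        if n == 0 or self.pos >= self.size:
            return b""
        end = min(self.pos + n, self.size) - 1
        resp = self.s3.get_object(Bucket=self.bucket, Key=self.key,
                                  Range=f"bytes={self.pos}-{end}")
        data = resp["Body"].read()
        self.pos += len(data)
        return data


def read_member(s3, bucket, key, member):
    """Fetch one file out of a zip stored on S3."""
    with zipfile.ZipFile(RangedS3File(s3, bucket, key)) as zf:
        return zf.read(member)
```

Each `seek`+`read` pair turns into one Range GET, so a single-member read costs a handful of small requests rather than the full archive transfer.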

I agree with @BraveNewCurrency's answer.
You would need your own server to do this effectively, since AWS S3 is really just a key-value store.
Command-line tools will not work: with this many files, the argument list alone becomes unmanageable.

You do however have some options that might not be so free or easy to setup.

I am actually involved with a cheap commercial project that does exactly that. It provides both an API and an option to launch your own pre-configured EC2 zipper server.

Large migrations (terabyte to petabyte scale):
AWS Snowball

You can also build your own servers using the following free packages (JavaScript & Go (Golang)):