Why don't Linux distributions default to mounting tmpfs with infinite inodes?

Usually (e.g. ext2, ext3, ext4, UFS), the number of inodes a file system can hold is set at creation time, so no mount option can work around it.
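As a hedged illustration (the mount point and device name are examples), the fixed inode budget of an ext4 filesystem can be inspected at any time but only chosen when the filesystem is created:

```shell
# Show total/used/free inodes; the "Inodes" total was fixed at mkfs time.
df -Pi /

# The count is only adjustable at creation time, e.g. (destructive!):
#   mkfs.ext4 -N 2000000 /dev/sdXN       # request ~2 million inodes
#   tune2fs -l /dev/sdXN | grep -i 'inode count'
```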

Some filesystems, such as XFS, expose the ratio of space reserved for inodes as a tunable, so it can be increased at any time.
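For example (device and mount point are illustrative), XFS lets the maximum percentage of space usable by inodes be raised on a live, mounted filesystem:

```shell
# Raise the inode space ceiling to 25% on a mounted XFS filesystem
# (requires xfsprogs and root; /data is an example mount point):
#   xfs_growfs -m 25 /data
#
# The current value is reported as "imaxpct" by:
#   xfs_info /data
```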

Modern file systems like ZFS or Btrfs have no hardcoded limit on the number of files they can store; inodes (or their equivalent) are created on demand.

Edit: narrowing the answer to the updated question.

With tmpfs, the default number of inodes is computed to be large enough for most realistic use cases. The only situation where this default wouldn't be optimal is when a large number of empty files are created on tmpfs. In that case, the best practice is to set the nr_inodes parameter to a value large enough for all the files to fit, but not to use 0 (= unlimited). The tmpfs documentation states this shouldn't be the default setting because of a risk of memory exhaustion by non-root users:

if nr_inodes=0, inodes will not be limited.  It is generally unwise to
mount with such options, since it allows any user with write access to
use up all the memory on the machine; but enhances the scalability of
that instance in a system with many cpus making intensive use of it.
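A hedged sketch of the recommended alternative (mount point and limits are illustrative): giving tmpfs an explicit nr_inodes budget instead of 0 keeps both data and inode usage bounded:

```shell
# Mount with explicit limits (requires root):
#   mount -t tmpfs -o size=512m,nr_inodes=200000 tmpfs /mnt/scratch
#
# Equivalent /etc/fstab entry:
#   tmpfs  /mnt/scratch  tmpfs  size=512m,nr_inodes=200000  0  0
#
# Both limits can be raised later without unmounting:
#   mount -o remount,size=1g,nr_inodes=400000 /mnt/scratch
#
# Inspect the inode budget of an already-mounted tmpfs:
df -Pi /dev/shm 2>/dev/null || df -Pi /
```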

However, it is unclear how this could happen, given that tmpfs RAM usage is by default limited to 50% of physical RAM:

size:      The limit of allocated bytes for this tmpfs instance. The 
           default is half of your physical RAM without swap. If you
           oversize your tmpfs instances the machine will deadlock
           since the OOM handler will not be able to free that memory.
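This default can be checked against the machine's RAM. On many distributions /dev/shm is a tmpfs, though note that containers and some distros override its size, so the path and the 50% expectation are assumptions here:

```shell
# tmpfs size (1K blocks) for /dev/shm, falling back to / if absent:
df -Pk /dev/shm 2>/dev/null || df -Pk /
# Physical RAM, for comparison (the tmpfs default is half of MemTotal):
grep MemTotal /proc/meminfo
```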

Many people will be more concerned about adjusting the default amount of memory to an amount that matches what their application demands.