Why is the inode table usually not resizable?

Say you did make the inode table a file; then the next question is: where do you store information about that file? You'd thus need "real" inodes and "extended" inodes, like an MS-DOS partition table. Granted, you'd only need one (or maybe a few, e.g., to also make your journal a file). But you'd still have special cases and different code paths. Any corruption to that file would be disastrous, too. And consider that, before journaling, it was common for files being written when the power went out to be heavily damaged. Operations on your inode-table file would have to be far more robust against power failure, crashes, and the like than they were on, e.g., ext2.

Traditional Unix filesystems found a simpler (and more robust) solution: put an inode block (or group of blocks) every X blocks. Then you find any inode by simple arithmetic. Of course, that means it's not possible to add more inodes without restructuring the entire filesystem. And even if you lose or corrupt the inode block you were writing to when the power failed, you only lose a few inodes, far better than losing a substantial portion of the filesystem.
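The "simple arithmetic" looks roughly like this. The sketch below mimics the ext2-style layout (inode numbers starting at 1, fixed-size inodes packed into a per-group table); the parameter values in the usage example are made up for illustration, while on a real filesystem they would come from the superblock:

```python
def locate_inode(inode_num, inodes_per_group, inode_size, block_size):
    """Find an inode's on-disk position from its number alone.

    Returns (block_group, block_within_that_group's_inode_table,
    byte_offset_within_that_block). No lookup structure is needed;
    the fixed layout makes it pure arithmetic.
    """
    index = inode_num - 1                 # inode numbers start at 1 on ext2
    group = index // inodes_per_group     # which block group holds it
    offset = (index % inodes_per_group) * inode_size  # bytes into the table
    return group, offset // block_size, offset % block_size

# Hypothetical geometry: 8192 inodes per group, 256-byte inodes, 4 KiB blocks.
print(locate_inode(12, 8192, 256, 4096))     # (0, 0, 2816)
print(locate_inode(8200, 8192, 256, 4096))   # (1, 0, 1792)
```

The flip side is visible in the math: `inodes_per_group` is baked into every address computation, which is exactly why the total can't grow later without rewriting the whole layout.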

More modern designs index inodes with structures such as B-tree variants and allocate them on demand, so filesystems like Btrfs, XFS, and ZFS do not suffer from fixed inode limits.
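You can observe the difference from userspace with `statvfs`, which reports inode totals per filesystem. This is a small sketch, assuming a POSIX system; on a fixed-table filesystem such as ext4 the total is the value set at mkfs time, while dynamic-inode filesystems (Btrfs, for example) commonly report 0 because there is no fixed total:

```python
import os

# Query inode counts for the filesystem containing the root directory.
st = os.statvfs("/")
print("total inodes:", st.f_files)   # fixed total on ext4; often 0 on Btrfs
print("free inodes: ", st.f_ffree)
```

The same numbers are what `df -i` prints; running out of `f_ffree` while plenty of disk space remains is the classic symptom of a fixed inode table sized too small.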

Many filesystems do have a dynamically allocated inode table, or its moral equivalent (XFS, Btrfs, ZFS, VxFS, ...).

The original Unix UFS, though, had an inode count fixed at filesystem creation time, and filesystems derived from it (Linux ext, Solaris UFS) often continued the scheme. It's robust and simpler to implement, and so many use cases are a good fit that designing a new filesystem just to avoid that one problem is hard to justify.

There are filesystems that allocate inodes dynamically: off the top of my head, at least Veritas VxFS (the default filesystem of HP-UX, and one of the choices available on Solaris) and XFS (the standard filesystem type on RHEL 7) work that way, as do Btrfs and IBM's JFS.