Why not implement 1 Gbps, when all I need is 20 Mbps?

TTL (single-ended, unterminated) signals can easily handle 20 Mbps or more — look at SPI, for example. If you're only going a few inches, ribbon cable and IDC connectors (or a backplane of some sort) will get you from board to board.

1 Gbps puts you into the realm of having to deal with impedance-controlled traces, connectors and cables. The receivers will need to use PLL/DLL techniques to maintain synchronization and separate clock/data, whereas at the slower speed, normal synchronous logic will be sufficient. The 50× overkill and the additional headaches are simply not worth it, if you're sure that 20 Mbps will suffice for the foreseeable future.
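
To put rough numbers on the impedance-control point, here is a back-of-envelope sketch; the edge rates and the FR-4 propagation delay are assumptions for illustration, not figures from any particular design. The usual rule of thumb is that a trace has to be treated as a transmission line once its one-way delay exceeds about a sixth of the rise time.

```python
# Back-of-envelope check: when does a trace have to be treated as a
# transmission line?  Rule of thumb: when the one-way trace delay exceeds
# roughly 1/6 of the signal rise time.  All numbers here are assumptions.

PROP_DELAY_NS_PER_INCH = 0.17   # ~170 ps/inch on FR-4 microstrip (typical, assumed)

def critical_length_inches(rise_time_ns: float) -> float:
    """Longest trace that can still be treated as lumped (rule of thumb)."""
    return (rise_time_ns / 6.0) / PROP_DELAY_NS_PER_INCH

print(critical_length_inches(5.0))   # ~4.9 in  -- relaxed 5 ns edges, 20 Mbps class
print(critical_length_inches(0.1))   # ~0.1 in  -- 100 ps edges, 1 Gbps class
```

At 20 Mbps with relaxed edges you have a few inches of ribbon cable to play with before terminations matter; at gigabit edge rates even a connector footprint is electrically long.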


I once designed (25 or so years ago) a custom serial bus protocol for board-to-board control and status among boards in a telecom rack. Sort of a cross between I2C and SPI — unidirectional signals like SPI, but embedded device addresses like I2C.


A few reasons:

Power

Faster speed means more power. Not only do you need faster analog circuits, which will consume more power, but all of the electronics surrounding them need to be faster as well: your digital logic, your latches, your clock management, and so on. If you get to 1 Gbps by using multilevel signalling, you now need better ADCs and DACs, you might need more complex filtering, and you could start requiring FEC, which also has to keep up.
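
As a rough sketch of that multilevel trade-off (textbook approximations, not figures for any specific transceiver): packing more bits per symbol lowers the symbol rate, but the eye shrinks, so the ADCs, DACs and the rest of the analog front end need a correspondingly better SNR.

```python
import math

def symbol_rate_mbaud(bit_rate_mbps: float, levels: int) -> float:
    """Symbol rate needed to carry bit_rate_mbps with PAM-<levels> signalling."""
    return bit_rate_mbps / math.log2(levels)

def snr_penalty_db_vs_nrz(levels: int) -> float:
    """Approximate SNR penalty vs. 2-level NRZ: the eye shrinks by (levels - 1)."""
    return 20 * math.log10(levels - 1)

print(symbol_rate_mbaud(1000, 2), snr_penalty_db_vs_nrz(2))  # 1000 Mbaud, +0 dB
print(symbol_rate_mbaud(1000, 4), snr_penalty_db_vs_nrz(4))  # 500 Mbaud, ~+9.5 dB
```

Halving the symbol rate costs you roughly 9.5 dB of SNR, and that budget ultimately gets paid for in power.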

Chip size

Faster means more going on. You need better clock stability, which means bigger circuits, and tighter timing, which means a more complex clock-recovery system. You might need to switch to DSP-based channel equalization. And if you do end up needing FEC, that takes chip area as well.

Environment sensitivity

If you switch from a few tens of megabaud to whatever is needed for gigabit, you become far more sensitive to the environment. Small mismatches that are unnoticeable at a few tens of MHz become resonant stubs at higher frequencies, and reflections can start causing intermittent failures. A cable nicked through years of abuse (I don't know the application environment for your product) might be fine at lower speeds but cause poor performance when you go faster.
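
To put a number on the resonant-stub point (the FR-4 signal velocity is an assumed typical figure): an unterminated stub behaves roughly like a quarter-wave resonator at f = v / (4 × L).

```python
SIGNAL_VELOCITY_IN_PER_NS = 6.0   # ~6 in/ns on FR-4 (typical, assumed)

def quarter_wave_resonance_ghz(stub_length_inches: float) -> float:
    """Frequency (GHz) at which a stub of the given length is a quarter wave long."""
    return SIGNAL_VELOCITY_IN_PER_NS / (4.0 * stub_length_inches)

print(quarter_wave_resonance_ghz(1.0))   # ~1.5 GHz
```

A 1-inch stub resonating around 1.5 GHz is invisible to a 20 Mbps signal, but it sits right in the harmonic content of a 1 Gbps link.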

Design effort

I think it is obvious from all of the additional issues I discussed above that the time and effort of designing a faster communication link is significant. This alone should be enough of a reason.

EMI

Faster speed means meeting EMI requirements gets harder: faster edges push significant emission energy into higher harmonics, which are more difficult to filter and shield.
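
A rough way to see why (edge rates assumed for illustration): the significant emission spectrum of a digital signal extends up to roughly 0.35 divided by the rise time, so faster edges spread energy over a far wider band that the enclosure, cabling and filtering have to contain.

```python
def knee_frequency_mhz(rise_time_ns: float) -> float:
    """Approximate upper edge of the significant emission spectrum (MHz),
    using the common f_knee ~= 0.35 / t_rise rule of thumb."""
    return 0.35 / rise_time_ns * 1000.0

print(knee_frequency_mhz(5.0))   # ~70 MHz   (assumed 5 ns edges, 20 Mbps class)
print(knee_frequency_mhz(0.1))   # ~3500 MHz (assumed 100 ps edges, gigabit class)
```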


The obvious question is, "Does 1 Gbps mean 1000BASE-T Ethernet?" If that's what the customer is thinking, your requirement that "we don't have room for things like magnetics" rules it out right away. Ethernet does use magnetics in the physical layer, and when I designed an interface some years ago the magnetics were part of a roughly 1-inch cube.

You say you're using FPGAs, but you don't say whose. If you're going with Xilinx, you should be aware that the current models natively support LVDS, which would seem ideal for your purpose. Early LVDS systems (hi-def televisions) ran at 122 Mbps, and the technology can go well over 1 Gbps if you really need it to. Because the signalling is differential, and assuming your two boards don't have wildly floating grounds, noise immunity is excellent.

As for your specific choice of clock frequencies, adding more headroom than you think you need is one of those decisions which can save your bacon in the future, so I wouldn't rule out picking something like 100 MHz, but that's up to you. You might acquaint your customer with Roberge's Law (Jim Roberge was a well-known electrical engineering professor at MIT a few decades ago): "Those who ask for more bandwidth than they need deserve what they get." Granted, he was talking about servo systems, but the principle remains good over a remarkably wide range of disciplines.