Why does C# System.Decimal (decimal) "waste" bits?

Based on Kevin Gosse's comment:

For what it's worth, the decimal type seems to predate .net. The .net framework CLR delegates the computations to the oleaut32 lib, and I could find traces of the DECIMAL type as far back as Windows 95

I searched further and found a likely user of the DECIMAL code in oleaut32 on Windows 95.

The old (non-.NET) Visual Basic and VBA have a sort-of-dynamic type called 'Variant'. In it (and only in it) you could store something nearly identical to our current System.Decimal.

A Variant is always 128 bits, with the first 16 bits reserved for an enum value indicating which data type is inside the Variant.
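To make that layout concrete, here is a rough C# sketch of such a 128-bit Variant holding a decimal value. The field names and offsets are mine, not the Windows header definitions; the sketch only illustrates the 16-bit type tag followed by the 112 bits of payload described above.

```csharp
using System;
using System.Runtime.InteropServices;

// Rough sketch (illustrative field names, not the SDK definition) of the
// 128-bit Variant layout described above, as it might look holding a decimal.
[StructLayout(LayoutKind.Explicit, Size = 16)]
struct VariantSketch
{
    // 16-bit tag telling which data type the Variant currently holds
    [FieldOffset(0)] public ushort TypeTag;

    // The remaining 112 bits: scale (power of ten), sign, and a 96-bit integer
    [FieldOffset(2)] public byte Scale;
    [FieldOffset(3)] public byte Sign;
    [FieldOffset(4)] public uint Hi32;
    [FieldOffset(8)] public ulong Lo64;
}

class VariantSketchDemo
{
    static void Main()
    {
        // The whole structure occupies exactly 128 bits (16 bytes).
        Console.WriteLine(Marshal.SizeOf<VariantSketch>()); // 16
    }
}
```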

The way the remaining 112 bits are split up could be based on common CPU architectures of the early '90s, or on ease of use for the Windows programmer. It sounds sensible not to pack the exponent and the sign into a single byte just to gain one more byte for the integer.
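That same split is still visible in today's System.Decimal. The small sketch below uses decimal.GetBits to show where the 96-bit integer, the scale and the sign live, and which bits of the last word stay unused; the example value is arbitrary.

```csharp
using System;

class DecimalBitsDemo
{
    static void Main()
    {
        // decimal.GetBits returns four 32-bit ints:
        //   bits[0..2] : the 96-bit integer (low, mid, high)
        //   bits[3]    : scale in bits 16-23, sign in bit 31
        // Bits 0-15 and 24-30 of bits[3] are always zero - part of the
        // "wasted" bits the question asks about.
        decimal value = -123.4500m;
        int[] bits = decimal.GetBits(value);

        Console.WriteLine($"lo:    0x{bits[0]:X8}");   // 0x0012D644 (= 1234500)
        Console.WriteLine($"mid:   0x{bits[1]:X8}");   // 0x00000000
        Console.WriteLine($"hi:    0x{bits[2]:X8}");   // 0x00000000
        Console.WriteLine($"flags: 0x{bits[3]:X8}");   // 0x80040000

        int scale = (bits[3] >> 16) & 0xFF;            // digits after the decimal point
        bool isNegative = bits[3] < 0;                 // sign bit is bit 31
        Console.WriteLine($"scale = {scale}, negative = {isNegative}");
    }
}
```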

When .NET was built, the existing (low-level) code for this type and its operations was reused for System.Decimal.

None of this is 100% verified, and I would have liked the answer to contain more historical evidence, but it is what I could piece together.