Memory usage of Python base types (particularly int and float)
In short, it all boils down to how Python represents arbitrarily long integers. `float` values, by contrast, are represented (with limited precision) just like a C `double`.
In the CPython implementation, every object (source) begins with a reference count and a pointer to the object's type. On a 64-bit build, that header is 16 bytes.
A float object stores its data as a C `double` (source), which is 8 bytes. So 16 + 8 = 24 bytes for a float object.
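A quick check of the 24-byte figure (this snippet is not from the original answer and assumes a 64-bit CPython build):

```python
import sys

# 16-byte object header + 8-byte C double = 24 bytes.
# Every float is the same size, regardless of its value.
print(sys.getsizeof(1.0))       # 24
print(sys.getsizeof(3.141592))  # 24
```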
With integers, the situation is more complicated. Integer objects are represented as variable-sized objects (source), which adds another 8 bytes (the size field) on top of the 16-byte header. The digits are stored in an array: depending on the platform, Python uses either an array of 32-bit unsigned integers holding 30-bit digits, or an array of 16-bit unsigned integers holding 15-bit digits. For a small integer the array holds a single 32-bit element, so add another 4 bytes: 16 + 8 + 4 = 28 bytes.
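You can verify the 28-byte figure, and see exactly where one 30-bit digit stops being enough (again assuming a 64-bit CPython build with 30-bit digits):

```python
import sys

# 16-byte header + 8-byte size field + one 4-byte digit = 28 bytes
print(sys.getsizeof(1))      # 28
print(sys.getsizeof(2**29))  # 28 -- still fits in a single 30-bit digit
print(sys.getsizeof(2**30))  # 32 -- needs a second digit
```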
If you want to represent a larger integer, the size will grow:
```python
sys.getsizeof(int(2**32))  # prints 32 (24 + 2*4 bytes)
sys.getsizeof(int(2**64))  # prints 36 (24 + 3*4 bytes)
```
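The pattern above generalizes: the size is 24 bytes plus 4 bytes per 30-bit digit. A small helper (`predicted_size` is a name I made up for illustration, and it assumes a 64-bit build with 30-bit digits) makes the formula explicit:

```python
import sys

def predicted_size(n: int) -> int:
    # 16-byte header + 8-byte size field + 4 bytes per 30-bit digit;
    # even 0 and 1 occupy at least one digit slot.
    ndigits = max(1, (n.bit_length() + 29) // 30)
    return 24 + 4 * ndigits

for n in (1, 2**32, 2**64, 10**100):
    print(n.bit_length(), predicted_size(n), sys.getsizeof(n))
```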
Note that with `sys.getsizeof(int)` you're getting the size of the class itself, not of an instance of the class. The same holds for any type, since `int` is itself an object of type `type`:

```python
print(type(int))  # prints <class 'type'>
```
If you look into the source, there's a lot of stuff under the hood. On my Python 3.6.9 (Linux/64-bit), `sys.getsizeof(int)` prints 400 bytes.
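The exact figure varies across Python versions, but the class object is always far larger than any single instance, since it carries the method table, name, docstring, and so on (this comparison is my own sketch, not from the original answer):

```python
import sys

# The type object dwarfs an instance: it holds per-type machinery,
# while the instance holds only the header plus its value.
print(sys.getsizeof(int))  # hundreds of bytes, version-dependent
print(sys.getsizeof(1))    # 28 on a 64-bit build
```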
Looking at the docs, it's important to observe that:

> Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to.
So what can you infer from the fact that the value returned by `sys.getsizeof(int(1))` is greater than that returned by `sys.getsizeof(float(1))`? Simply that it takes more memory to represent an `int` than it does to represent a `float`. Is this surprising? Well, possibly not, if we can expect to "do more things" with an `int` than we can with a `float`. We can gauge the "amount of functionality" to a first approximation by looking at the number of their attributes:
```python
>>> len(dir(int))
70
>>> len(dir(float))
57
```
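To see where that difference comes from, you can diff the attribute sets. The int-only attributes are mostly integer-specific operations: bit manipulation, byte conversion, and the bitwise operators (the exact list varies slightly by Python version):

```python
# Attributes that int has but float lacks
int_only = sorted(set(dir(int)) - set(dir(float)))
print(int_only)
# includes e.g. 'bit_length', 'to_bytes', 'from_bytes', '__lshift__'
```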