Python Memory Model

In your first example you are storing the same integer len(arr) times, so Python needs to store the integer only once in memory and refer to it len(arr) times.

In your second example, you are storing len(arr) different integers, so Python must allocate storage for len(arr) integer objects and store a reference to one of them in each of the len(arr) slots.
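
To make the difference observable, here is a minimal sketch of the two cases (the name arr and the sizes are hypothetical stand-ins for the question's code):

n = (2 ** 32) * 2

# first case: every slot refers to the SAME int object
arr = [n] * 1000
print(all(x is n for x in arr))        # True: one object, 1000 references

# second case: 1000 distinct int objects, one per slot
arr2 = [(2 ** 32) * 2 + i for i in range(1000)]
print(len(set(id(x) for x in arr2)))   # 1000: each slot refers to its own object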

Java special-cases a few value types (including integers) so that they're stored by value, instead of by object reference like everything else. Python doesn't special-case such types, so assigning n to many entries in a list (or other normal Python container) doesn't have to make copies.

Edit: note that the references are always to objects, not "to variables" -- there's no such thing as "a reference to a variable" in Python (or Java). For example:

>>> n = 23
>>> a = [n, n]
>>> print(id(n), id(a[0]), id(a[1]))
8402048 8402048 8402048
>>> n = 45
>>> print(id(n), id(a[0]), id(a[1]))
8401784 8402048 8402048

We see from the first print that both entries in list a refer to exactly the same object as n refers to -- but when n is reassigned, it now refers to a different object, while both entries in a still refer to the previous one.

An array.array (from the Python standard-library module array) is very different from a list: it keeps compact copies of items of one homogeneous type, taking as few bytes per item as are needed to store values of that type. All normal containers instead keep references, internally implemented in the C-coded CPython runtime as pointers to PyObject structures: on a 32-bit build each pointer takes 4 bytes, and each PyObject takes at least 16 or so (including a pointer to the type, the reference count, the actual value, and malloc rounding up). Arrays hold no such references, which is why they can't be heterogeneous and can't hold items other than a few basic types.

For example, a 1000-item container, with all items being different small integers (whose values fit in 2 bytes each), would take about 2,000 bytes of data as an array.array('h'), but about 20,000 as a list (a 4-byte pointer plus a 16-or-so-byte PyObject per item). If all items were the same number, the array would still take 2,000 bytes of data, while the list would take about 4,000 bytes for its 1000 pointers plus only 20 or so for the single shared int object. (In every one of these cases, add about another 16 or 32 bytes for the container object proper, on top of the memory for the data.)
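
You can get a rough feel for these numbers on CPython with sys.getsizeof, keeping in mind that for a list it reports only the container and its pointers, not the int objects those pointers refer to (a sketch; exact figures vary by build and Python version):

import sys
from array import array

nums = list(range(1000))         # 1000 distinct small integers

compact = array('h', nums)       # 'h' = signed short, 2 bytes per item
print(sys.getsizeof(compact))    # roughly 2,000 bytes of data plus overhead

refs = list(nums)                # one pointer per slot; int objects not counted
print(sys.getsizeof(refs))

same = [7] * 1000                # 1000 references to a single int object
print(sys.getsizeof(same))       # same container size; only one int object exists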

However, although the question says "array" (even in a tag), I doubt its arr is actually an array.array -- if it were, it could not store (2**32)*2 (the largest int values in an array are 32 bits), and the memory behavior reported in the question would not actually be observed. So the question is probably in fact about a list, not an array.
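
Indeed, trying to put such a value into an array with a 32-bit typecode fails outright (a quick check; 'i' is a 4-byte signed int on typical builds):

from array import array

try:
    array('i', [(2 ** 32) * 2])   # this value needs more than 32 bits
except OverflowError as err:
    print(err)                    # e.g. "signed integer is greater than maximum"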

Edit: a comment by @ooboo asks lots of reasonable follow-up questions, and rather than trying to squish the detailed explanation into a comment I'm moving it here.

It's weird, though -- after all, how is the reference to the integer stored? id(variable) gives an integer; if the reference is an integer itself, isn't it cheaper to use the integer?

CPython stores references as pointers to PyObject (Jython and IronPython, written in Java and C#, use those languages' implicit references; PyPy, written in Python, has a very flexible back-end and can use lots of different strategies).
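
On CPython you can even watch the per-object reference count that comes with this scheme; sys.getrefcount is CPython-specific, and its absolute value includes temporary references, so only the difference is meaningful here:

import sys

n = (2 ** 32) * 2
before = sys.getrefcount(n)
arr = [n] * 100                  # 100 more references to the same object
after = sys.getrefcount(n)
print(after - before)            # 100 on CPython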

id(v) gives (on CPython only) the numeric value of the pointer, just as a handy way to uniquely identify the object. A list can be heterogeneous (some items may be integers, others objects of different types), so it's just not a sensible option to store some items as pointers to PyObject and others differently (each object also needs a type indication and, in CPython, a reference count, at least). array.array, being homogeneous and limited, can (and does) store a copy of the items' values rather than references; this is often cheaper, but not for collections where the same item appears a LOT, such as a sparse array where the vast majority of items are 0.
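
The homogeneity restriction is easy to see in practice (a small sketch):

from array import array

mixed = [1, "two", 3.0]          # a list happily holds mixed types
a = array('h', [1, 2, 3])        # an array is locked to one basic type
try:
    a.append("four")
except TypeError as err:
    print(err)                   # arrays reject items of the wrong type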

A Python implementation would be fully allowed by the language specs to try subtler tricks for optimization, as long as it preserves semantics untouched, but as far as I know none currently does for this specific issue (you could try hacking a PyPy backend, but don't be surprised if the overhead of checking for int vs non-int overwhelms the hoped-for gains).

Also, would it make a difference if I assigned 2**64 to every slot instead of assigning n, when n holds a reference to 2**64? What happens when I just write 1?

These are examples of implementation choices that every implementation is fully allowed to make, as it's not hard to preserve the semantics (so, hypothetically, even 3.1 and 3.2 could behave differently in this regard).

When you use an int literal (or any other literal of an immutable type), or other expression producing a result of such a type, it's up to the implementation to decide whether to make a new object of that type unconditionally, or spend some time checking among such objects to see if there's an existing one it can reuse.

In practice, CPython (and I believe the other implementations, but I'm less familiar with their internals) uses a single copy of sufficiently small integers (keeps a predefined C array of a few small integer values in PyObject form, ready to use or reuse at need) but doesn't go out of its way in general to look for other existing reusable objects.
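
You can probe that small-integer cache on CPython, as long as the values are computed at runtime so the compiler's constant sharing (discussed next) doesn't get in the way; the cached range, currently -5 through 256, is an implementation detail:

a = int("100")
b = int("100")
print(a is b)        # True on CPython: both refer to the cached small int

c = int("1000")
d = int("1000")
print(c is d)        # False on CPython: 1000 is outside the cache, two objects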

But, for example, identical literal constants within the same function are readily compiled as references to a single constant object in the function's table of constants; that's an optimization so easy that I believe every current Python implementation performs it.
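
That constant sharing is easy to observe (shown on CPython; the value is arbitrary):

def f():
    x = 123456
    y = 123456       # same literal in the same function: one shared constant
    return x is y

print(f())           # True on CPython: both names refer to the same object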

It can sometimes be hard to remember that Python is a language with several implementations that may (legitimately and correctly) differ in a lot of such details -- everybody, including pedants like me, tends to say just "Python" rather than "CPython" when talking about the popular C-coded implementation (except in contexts like this one, where drawing the distinction between language and implementation is paramount;-). Nevertheless, the distinction is quite important, and well worth repeating once in a while.