# Explain a surprising parity in the rounding direction of apparent ties in the interval [0, 1]

Not an answer, but I just want to flesh out what's puzzling about it. It's certainly not "random", but noting that isn't enough ;-) Just look at the 2-digit case for concreteness:

```
>>> from decimal import Decimal as D
>>> for i in range(5, 100, 10):
...     print('%2d' % i, D(i / 100))
 5 0.05000000000000000277555756156289135105907917022705078125
15 0.1499999999999999944488848768742172978818416595458984375
25 0.25
35 0.34999999999999997779553950749686919152736663818359375
45 0.450000000000000011102230246251565404236316680908203125
55 0.5500000000000000444089209850062616169452667236328125
65 0.65000000000000002220446049250313080847263336181640625
75 0.75
85 0.84999999999999997779553950749686919152736663818359375
95 0.9499999999999999555910790149937383830547332763671875
```

Now you can pair `i/100` with `(100-i)/100`, and their mathematical sum is exactly 1. So this pairs, in the above, 5 with 95, 15 with 85, and so on. The exact machine value for 5 rounds up, while that for 95 rounds down, which "is expected": if the true sum is 1, and one addend "rounds up", then surely the other "rounds down".

But that's not always so. 15 and 85 both round down, 25 and 75 are a mix, 35 and 65 are a mix, but 45 and 55 both round up.

What's at work that makes the total "up" and "down" cases *exactly* balance? Mark showed that they do for `10**3`, `10**7`, and `10**9`, and I verified exact balance holds for exponents 2, 4, 5, 6, 8, 10, and 11 too.
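For anyone who wants to reproduce the balance claim, here's a quick sketch using the `fractions` module (the helper name `updown_counts` is mine; order comparisons between a `float` and a `Fraction` are exact in Python):

```python
from fractions import Fraction

def updown_counts(n):
    """Count how many of the values i/10**n (i = 5, 15, 25, ...) convert
    to binary64 above vs. below their exact value."""
    up = down = 0
    denom = 10**n
    for i in range(5, denom, 10):
        x = i / denom                 # one correctly-rounded division
        exact = Fraction(i, denom)
        if x > exact:                 # comparison uses exact values
            up += 1
        elif x < exact:
            down += 1
    return up, down

for n in range(2, 6):
    print(n, updown_counts(n))
```

For each exponent the two counts come out equal, matching the balance described above.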

## A puzzling clue

This is *very* delicate. Instead of dividing by `10**n`, what if we multiplied by its reciprocal instead? Contrast this with the above:

```
>>> for i in range(5, 100, 10):
...     print('%2d' % i, D(i * (1 / 100)))
 5 0.05000000000000000277555756156289135105907917022705078125
15 0.1499999999999999944488848768742172978818416595458984375
25 0.25
35 0.350000000000000033306690738754696212708950042724609375
45 0.450000000000000011102230246251565404236316680908203125
55 0.5500000000000000444089209850062616169452667236328125
65 0.65000000000000002220446049250313080847263336181640625
75 0.75
85 0.84999999999999997779553950749686919152736663818359375
95 0.95000000000000006661338147750939242541790008544921875
```

Now 7 (instead of 5) cases round up.

For `10**3`, 64 (instead of 50) round up; for `10**4`, 828 (instead of 500); for `10**5`, 9763 (instead of 5000); and so on. So there's *something* vital about suffering no more than one rounding error in computing `i/10**n`.
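For what it's worth, the single-rounding vs. double-rounding contrast can be reproduced with a short sketch (the helper name `conversion_counts` is mine; note it classifies only the sign of the binary64 conversion error, whereas the counts quoted above appear to also include the exactly representable ties that `round` then resolves upward):

```python
from fractions import Fraction

def conversion_counts(n, via_reciprocal):
    """Count up/down conversion errors for i/10**n, i = 5, 15, 25, ...,
    computed either by one division or by multiplying by 1/10**n."""
    up = down = 0
    denom = 10**n
    recip = 1 / denom                 # itself rounded: the extra error source
    for i in range(5, denom, 10):
        x = i * recip if via_reciprocal else i / denom
        exact = Fraction(i, denom)
        if x > exact:
            up += 1
        elif x < exact:
            down += 1
    return up, down

print(conversion_counts(2, False))   # single rounding: balanced
print(conversion_counts(2, True))    # double rounding: balance destroyed
```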

It turns out that one can prove something stronger, that has nothing particularly to do with decimal representations or decimal rounding. Here's that stronger statement:

**Theorem.** Choose a positive integer `n <= 2^1021`, and consider the sequence of length `n` consisting of the fractions `1/2n`, `3/2n`, `5/2n`, ..., `(2n-1)/2n`. Convert each fraction to the nearest IEEE 754 binary64 floating-point value, using the IEEE 754 `roundTiesToEven` rounding direction. Then the number of fractions for which the converted value is larger than the original fraction will exactly equal the number of fractions for which the converted value is smaller than the original fraction.

The original observation involving the sequence `[0.005, 0.015, ..., 0.995]` of floats then follows from the case `n = 100` of the above statement: in 96 of the 100 cases, the result of `round(value, 2)` depends on the sign of the error introduced when rounding to binary64 format, and by the above statement, 48 of those cases will have positive error, and 48 will have negative error, so 48 will round up and 48 will round down. The remaining 4 cases (`0.125, 0.375, 0.625, 0.875`) convert to `binary64` format with no change in value, and then the Banker's Rounding rule for `round` kicks in to round `0.125` and `0.625` down, and `0.375` and `0.875` up.

**Notation.** Here and below, I'm using pseudo-mathematical notation, not Python notation: `^` means exponentiation rather than bitwise exclusive or, and `/` means exact division, not floating-point division.

## Example

Suppose `n = 11`. Then we're considering the sequence `1/22`, `3/22`, ..., `21/22`. The exact values, expressed in decimal, have a nice simple recurring form:

```
1/22 = 0.04545454545454545...
3/22 = 0.13636363636363636...
5/22 = 0.22727272727272727...
7/22 = 0.31818181818181818...
9/22 = 0.40909090909090909...
11/22 = 0.50000000000000000...
13/22 = 0.59090909090909090...
15/22 = 0.68181818181818181...
17/22 = 0.77272727272727272...
19/22 = 0.86363636363636363...
21/22 = 0.95454545454545454...
```

The nearest exactly representable IEEE 754 binary64 floating-point values are:

```
1/22 -> 0.04545454545454545580707161889222334139049053192138671875
3/22 -> 0.13636363636363635354342704886221326887607574462890625
5/22 -> 0.2272727272727272651575702866466599516570568084716796875
7/22 -> 0.318181818181818176771713524431106634438037872314453125
9/22 -> 0.409090909090909116141432377844466827809810638427734375
11/22 -> 0.5
13/22 -> 0.59090909090909093936971885341336019337177276611328125
15/22 -> 0.68181818181818176771713524431106634438037872314453125
17/22 -> 0.7727272727272727070868540977244265377521514892578125
19/22 -> 0.86363636363636364645657295113778673112392425537109375
21/22 -> 0.954545454545454585826291804551146924495697021484375
```

And we see by direct inspection that when converting to float, 1/22, 9/22, 13/22, 19/22 and 21/22 rounded upward, while 3/22, 5/22, 7/22, 15/22 and 17/22 rounded downward. (11/22 was already exactly representable, so no rounding occurred.) So 5 of the 11 values were rounded up, and 5 were rounded down. The claim is that this perfect balance occurs regardless of the value of `n`.
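The table above is straightforward to regenerate, since converting a float to `Decimal` shows its exact value (a small sketch):

```python
from decimal import Decimal
from fractions import Fraction

up = down = exact = 0
for k in range(1, 22, 2):
    f = Fraction(k, 22)
    x = float(f)                      # correctly rounded conversion
    print(f"{k}/22 -> {Decimal(x)}")
    if x > f:                         # comparisons use exact values
        up += 1
    elif x < f:
        down += 1
    else:
        exact += 1
print(up, down, exact)                # 5 up, 5 down, 1 exact
```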

## Computational experiments

For those who might be more convinced by numerical experiments than a formal proof, here's some code (in Python).

First, let's write a function to create the sequences we're interested in, using Python's `fractions` module:

```
from fractions import Fraction

def sequence(n):
    """ [1/2n, 3/2n, ..., (2n-1)/2n] """
    return [Fraction(2*i+1, 2*n) for i in range(n)]
```

Next, here's a function to compute the "rounding direction" of a given fraction `f`, which we'll define as `1` if the closest float to `f` is larger than `f`, `-1` if it's smaller, and `0` if it's equal (i.e., if `f` turns out to be exactly representable in IEEE 754 binary64 format). Note that the conversion from `Fraction` to `float` is correctly rounded under `roundTiesToEven` on a typical IEEE 754-using machine, and that the order comparisons between a `Fraction` and a `float` are computed using the exact values of the numbers involved.

```
def rounding_direction(f):
    """ 1 if float(f) > f, -1 if float(f) < f, 0 otherwise """
    x = float(f)
    if x > f:
        return 1
    elif x < f:
        return -1
    else:
        return 0
```

Now to count the various rounding directions for a given sequence, the simplest approach is to use `collections.Counter`:

```
from collections import Counter

def round_direction_counts(n):
    """ Count of rounding directions for sequence(n). """
    return Counter(rounding_direction(value)
                   for value in sequence(n))
```

Now we can put in any integer we like to observe that the count for `1` always matches the count for `-1`. Here's a handful of examples, starting with the `n = 100` example that started this whole thing:

```
>>> round_direction_counts(100)
Counter({1: 48, -1: 48, 0: 4})
>>> round_direction_counts(237)
Counter({-1: 118, 1: 118, 0: 1})
>>> round_direction_counts(24)
Counter({-1: 8, 0: 8, 1: 8})
>>> round_direction_counts(11523)
Counter({1: 5761, -1: 5761, 0: 1})
```

The code above is unoptimised and fairly slow, but I used it to run tests up to `n = 50000` and checked that the counts were balanced in each case.
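If you want to test much larger `n`, the `Fraction` arithmetic can be bypassed: compare `float(f)` with the exact value by integer cross-multiplication against `float.as_integer_ratio()`. This is a sketch of that idea (valid while `2*n` fits comfortably in 53 bits, so that `k / (2*n)` is a single correctly-rounded operation):

```python
from collections import Counter

def fast_round_direction_counts(n):
    """Same counts as round_direction_counts(n), using only integers."""
    counts = Counter()
    two_n = 2 * n
    for k in range(1, two_n, 2):
        x = k / two_n                  # correctly rounded: k and 2n are exact
        num, den = x.as_integer_ratio()
        diff = num * two_n - k * den   # same sign as float(k/2n) - k/2n
        counts[(diff > 0) - (diff < 0)] += 1
    return counts

print(fast_round_direction_counts(100))
```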

As an extra, here's an easy way to visualise the roundings for small `n`: it produces a string containing `+` for cases that round up, `-` for cases that round down, and `.` for cases that are exactly representable. So our theorem says that each signature has the same number of `+` characters as `-` characters.

```
def signature(n):
    """ String visualising rounding directions for given n. """
    return "".join(".+-"[rounding_direction(value)]
                   for value in sequence(n))
```

And some examples, demonstrating that there's no *immediately* obvious pattern:

```
>>> signature(10)
'+-.-+++.--'
>>> signature(11)
'+---+.+--++'
>>> signature(23)
'---+++-+-+-.-++--++--++'
>>> signature(59)
'-+-+++--+--+-+++---++---+++--.-+-+--+-+--+-+-++-+-++-+-++-+'
>>> signature(50)
'+-++-++-++-+.+--+--+--+--+++---+++---.+++---+++---'
```

## Proof of the statement

The original proof I gave was unnecessarily complicated. Following a suggestion from Tim Peters, I realised that there's a much simpler one. You can find the old one in the edit history, if you're *really* interested.

The proof rests on three simple observations. Two of those are floating-point facts; the third is a number-theoretic observation.

**Observation 1.** For any (non-tiny, non-huge) positive fraction `x`, `x` rounds "the same way" as `2x`.

If `y` is the closest binary64 float to `x`, then `2y` is the closest binary64 float to `2x`. So if `x` rounds up, so does `2x`, and if `x` rounds down, so does `2x`. If `x` is exactly representable, so is `2x`.

Small print: "non-tiny, non-huge" should be interpreted to mean that we avoid the extremes of the IEEE 754 binary64 exponent range. Strictly, the above statement applies for all `x` in the interval `[2^-1022, 2^1023)`. There's a corner case involving infinity to be careful of right at the top end of that range: if `x` rounds to `2^1023`, then `2x` rounds to `inf`, so the statement still holds in that corner case.

Observation 1 implies that (again provided that underflow and overflow are avoided) we can scale any fraction `x` by an arbitrary power of two without affecting the direction it rounds when converting to binary64.
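Observation 1 is easy to spot-check numerically; here's a quick sketch (the `direction` helper mirrors `rounding_direction` from earlier, redefined so the snippet stands alone):

```python
from fractions import Fraction

def direction(f):
    """1, -1 or 0 as float(f) is above, below or equal to f."""
    x = float(f)
    return (x > f) - (x < f)

# scaling by a power of two (up or down) never changes the rounding direction
for k in range(1, 1000, 2):
    f = Fraction(k, 1000)
    assert direction(f) == direction(2 * f) == direction(f / 2)
print("Observation 1 spot-check passed")
```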

**Observation 2.** If `x` is a fraction in the closed interval `[1, 2]`, then `3 - x` rounds the opposite way to `x`.

This follows because if `y` is the closest float to `x` (which implies that `y` must also be in the interval `[1.0, 2.0]`), then thanks to the even spacing of floats within `[1, 2]`, `3 - y` is also exactly representable and is the closest float to `3 - x`. This works even for ties under the roundTiesToEven definition of "closest", since the last bit of `y` is even if and only if the last bit of `3 - y` is.

So if `x` rounds up (i.e., `y` is greater than `x`), then `3 - y` is smaller than `3 - x`, and so `3 - x` rounds down. By the same argument, if `x` rounds down then `3 - x` rounds up, and if `x` is exactly representable, so is `3 - x`.

**Observation 3.** The sequence `1/2n, 3/2n, 5/2n, ..., (2n-1)/2n` of fractions is equal to the sequence `n/n, (n+1)/n, (n+2)/n, ..., (2n-1)/n`, up to scaling by powers of two and reordering.

This is just a scaled version of a simpler statement: that the sequence `1, 3, 5, ..., 2n-1` of integers is equal to the sequence `n, n+1, ..., 2n-1`, up to scaling by powers of two and reordering. That statement is perhaps easiest to see in the reverse direction: start out with the sequence `n, n+1, n+2, ..., 2n-1`, and then divide each integer by its largest power-of-two divisor. What you're left with must be, in each case, an odd integer smaller than `2n`, and it's easy to see that no such odd integer can occur twice, so by counting we must get every odd integer in `1, 3, 5, ..., 2n-1`, in some order.
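The underlying integer statement is cheap to verify for any particular `n` (a sketch; `odd_part` is my own helper name):

```python
def odd_part(m):
    """m divided by its largest power-of-two divisor."""
    while m % 2 == 0:
        m //= 2
    return m

# {n, ..., 2n-1} reduces, after removing powers of two, to {1, 3, ..., 2n-1}
for n in (11, 100, 237):
    assert sorted(odd_part(m) for m in range(n, 2 * n)) == list(range(1, 2 * n, 2))
print("Observation 3 spot-check passed")
```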

With these three observations in place, we can complete the proof. Combining Observation 1 and Observation 3, we get that the cumulative rounding directions (i.e., the total counts of rounds-up, rounds-down, stays-the-same) of `1/2n, 3/2n, ..., (2n-1)/2n` exactly match the cumulative rounding directions of `n/n, (n+1)/n, ..., (2n-1)/n`.

Now `n/n` is exactly one, so is exactly representable. In the case that `n` is even, `3/2` also occurs in this sequence, and is exactly representable. The rest of the values can be paired with each other in pairs that add up to `3`: `(n+1)/n` pairs with `(2n-1)/n`, `(n+2)/n` pairs with `(2n-2)/n`, and so on. And now by Observation 2, within each pair either one value rounds up and one value rounds down, or both values are exactly representable.

So the sequence `n/n, (n+1)/n, ..., (2n-1)/n` has exactly as many rounds-down cases as rounds-up cases, and hence the original sequence `1/2n, 3/2n, ..., (2n-1)/2n` has exactly as many rounds-down cases as rounds-up cases. That completes the proof.

Note: the restriction on the size of `n` in the original statement is there to ensure that none of our sequence elements lie in the subnormal range, so that Observation 1 can be used. The smallest positive binary64 normal value is `2^-1022`, so our proof works for all `n <= 2^1021`.

Not an answer, but a further comment.

I am working on the assumption that:

- the results of the original `n/1000` will have been rounded to either less than or more than the exact fractional value, by calculating an extra bit of precision and then using the 0 or 1 in that extra bit to determine whether to round up or down (binary equivalent of Banker's rounding)
- `round` is somehow comparing the value with the exact fractional value, or at least acting as if it is doing so (for example, doing the multiply-round-divide while using more bits of precision internally, at least for the multiply)
- taking it on trust from the question that half of the *exact* fractions can be shown to round up and the other half down

If this is the case, then the question is equivalent to saying:

- if you write the fractions as binimals, how many of them have a 1 in the *i*'th place (where the *i*'th place corresponds to the place *after* the final bit stored, which according to my assumptions will have been used to decide which way to round the number)

With this in mind, here is some code that will calculate arbitrary-precision binimals, then sum the *i*'th bit of these binimals (for the non-exact cases) and add on half the number of exact cases.

```
def get_binimal(x, y, places=100, normalise=True):
    """
    Returns a 2-tuple containing:
    - x/y as a binimal, e.g. for x=3, y=4 it would be 110000000...
    - whether it is an exact fraction (in that example, True)
    If normalise=True, give the fractional part of the binimal starting
    from the first 1 (i.e. the IEEE mantissa).
    """
    if x > y:
        raise ValueError("x > y not supported")
    frac = ""
    val = x
    exact = False
    seen_one = False
    if normalise:
        places += 1  # allow for the leading bit, which is always 1 (removed later)
    while len(frac) < places:
        val *= 2
        if val >= y:
            frac += "1"
            val -= y
            seen_one = True
            if val == 0:
                exact = True
        else:
            if seen_one or not normalise:
                frac += "0"
    if normalise:
        frac = frac[1:]  # discard the initial 1
    return (frac, exact)

places = 100
n_exact = 0
n = 100
divisor = n * 10
binimals = []
for x in range(5, divisor, 10):
    binimal, exact = get_binimal(x, divisor, places, True)
    print(binimal, exact, x, n)
    if exact:
        n_exact += 1
    else:
        binimals.append(binimal)

for i in range(places):
    print(i, n_exact // 2 + sum((b[i] == "1") for b in binimals))
```

Running this program gives for example:

```
0 50
1 50
2 50
3 50
4 50
5 50
6 50
7 50
8 50
... etc ...
```

Some observations from these results, namely:

- It is confirmed (from the results shown, plus experimenting with other values of `n`) that this gives the same counts as observed in the question (i.e. `n/2`), so the above hypothesis seems to be working.
- The value of *i* does not matter, i.e. there is nothing special about the 53 mantissa bits in IEEE 64-bit floats -- any other length would give the same result.
- It does not matter whether the numbers are normalised or not (see the `normalise` argument to my `get_binimal` function); if this is set to `True`, then the returned value is analogous to a normalised IEEE mantissa, but the counts are unaffected.

Clearly the binimal expansions will consist of repeating sequences, and the fact that *i* does not matter is showing that the sequences must be aligned in such a way that the sum of *i*'th digits is always the same because there are equal numbers with each alignment of the repeating sequence.

Taking the case where `n = 100`, and showing counts of the last 20 bits of each of the expansions (i.e. bits 80-99, because we asked for 100 places) using:

```
import collections
import pprint

counts = collections.Counter([b[-20:] for b in binimals])
pprint.pprint(counts.items())
```

gives something like the following, although here I have hand-edited the ordering so as to show the repeating sequences more clearly:

```
[('00001010001111010111', 4),
('00010100011110101110', 4),
('00101000111101011100', 4),
('01010001111010111000', 4),
('10100011110101110000', 4),
('01000111101011100001', 4),
('10001111010111000010', 4),
('00011110101110000101', 4),
('00111101011100001010', 4),
('01111010111000010100', 4),
('11110101110000101000', 4),
('11101011100001010001', 4),
('11010111000010100011', 4),
('10101110000101000111', 4),
('01011100001010001111', 4),
('10111000010100011110', 4),
('01110000101000111101', 4),
('11100001010001111010', 4),
('11000010100011110101', 4),
('10000101000111101011', 4),
('00110011001100110011', 4),
('01100110011001100110', 4),
('11001100110011001100', 4),
('10011001100110011001', 4)]
```

There are:

- 80 (=4 * 20) views of a 20-bit repeating sequence
- 16 (=4 * 4) views of a 4-bit repeating sequence corresponding to division by 5 (for example 0.025 decimal = (1/5) * 2^-3)
- 4 exact fractions (not shown), for example 0.375 decimal (= 3 * 2^-3)

As I say, this is **not claiming to be a full answer**.

The **really intriguing thing** is that this result does not seem to be disrupted by normalising the numbers. Discarding the leading zeros will certainly change the alignment of the repeating sequence for individual fractions (shifting the sequence by a varying number of bits depending on how many leading zeros were ignored), but it is doing so in such a way that the total count for each alignment is preserved. I find this possibly the most curious part of the result.

**And another curious thing** - the 20-bit repeating sequence consists of a 10-bit sequence followed by its ones' complement, so e.g. just the following two alignments in equal numbers would give the same total in every bit position:

```
10111000010100011110
01000111101011100001
```

and similarly for the 4-bit repeating sequence. BUT the result does not seem to depend on this - instead all 20 (and all 4) alignments are present in equal numbers.
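For what it's worth, that half-plus-complement structure is easy to verify mechanically for both repeating sequences (`is_self_complementary` is my own helper name):

```python
def is_self_complementary(s):
    """True if the second half of s is the bitwise complement of the first."""
    half = len(s) // 2
    flipped = "".join("1" if c == "0" else "0" for c in s[:half])
    return s[half:] == flipped

assert is_self_complementary("10111000010100011110")  # 20-bit sequence
assert is_self_complementary("0011")                  # 4-bit sequence
print("both repeating sequences are a half followed by its ones' complement")
```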