What is the endianness of binary literals in C++14?

Short answer: there isn't one. Write the number the way you would write it on paper.

Long answer: endianness is never exposed directly in code unless you really try to get at it (for example, with pointer tricks). 0b0111 is 7; the same rules apply as for hex. Writing

int i = 0xAA77;

doesn't mean 0x77AA on some platforms, because that would be absurd. Where would the missing zeros go with 32-bit ints, anyway? Would they be padded on the front and the whole thing flipped to 0x77AA0000, or would they be added after? I have no idea what anyone would expect if that were the case.

The point is that C++ doesn't make any assumptions about the endianness of the machine*. If you write code using the primitives and literals it provides, the behavior will be the same from machine to machine (unless you start circumventing the type system, which you may need to do).

To address your update: the number will be the way you write it out. The bits will not be reordered or anything of the sort; the most significant bit is on the left and the least significant bit is on the right.


There seems to be a misunderstanding here about what endianness is. Endianness refers to how bytes are ordered in memory and how they must be interpreted. If I gave you the number "4172" and asked "if this is four thousand one hundred seventy-two, what is the endianness?", you couldn't really give an answer, because the question doesn't make sense. (Some argue that the largest digit on the left means big-endian, but without memory addresses the question of endianness is not answerable or relevant.) This is just a number: there are no bytes to interpret and no memory addresses. Assuming a 4-byte integer representation, the bytes that correspond to it are:

        low address ----> high address
Big endian:    00 00 10 4c
Little endian: 4c 10 00 00

So, given either of those and told "this is the computer's internal representation of 4172", you could determine whether it's little- or big-endian.

Now consider your binary literal 0b0111. These 4 bits represent one nybble, which can be stored as either

              low ---> high
Big endian:    00 00 00 07
Little endian: 07 00 00 00

But you don't have to care, because this is handled by the hardware; the language dictates that the compiler reads the literal from left to right, most significant bit to least significant bit.

Endianness is not about individual bits. Given that a byte is 8 bits, if I hand you 0b00000111 and ask "is this little- or big-endian?", again you can't say, because you have only one byte (and no addresses). Endianness doesn't pertain to the order of bits in a byte; it refers to the ordering of entire bytes with respect to their addresses (unless, of course, you have one-bit bytes).

You don't have to care about what your computer is using internally. 0b0111 just saves you from having to write something like

unsigned int mask = 7; // only keep the lowest 3 bits

by writing

unsigned int mask = 0b0111;

without needing a comment to explain the significance of the number.


* In C++20 you can check the endianness using std::endian.


All integer literals, including binary ones, are interpreted in the same way we normally read numbers (the leftmost digit being the most significant).

The C++ standard guarantees the same interpretation of literals without having to be concerned with the specific environment you're on. Thus, you don't have to concern yourself with endianness in this context.

Your example of 0b0111 is always equal to seven.

The C++ standard doesn't speak in terms of endianness with regard to number literals. Rather, it simply describes that literals have a consistent interpretation, and that the interpretation is the one you would expect.

C++ Standard - Integer Literals - 2.14.2 - paragraph 1

An integer literal is a sequence of digits that has no period or exponent part, with optional separating single quotes that are ignored when determining its value. An integer literal may have a prefix that specifies its base and a suffix that specifies its type. The lexically first digit of the sequence of digits is the most significant. A binary integer literal (base two) begins with 0b or 0B and consists of a sequence of binary digits. An octal integer literal (base eight) begins with the digit 0 and consists of a sequence of octal digits. A decimal integer literal (base ten) begins with a digit other than 0 and consists of a sequence of decimal digits. A hexadecimal integer literal (base sixteen) begins with 0x or 0X and consists of a sequence of hexadecimal digits, which include the decimal digits and the letters a through f and A through F with decimal values ten through fifteen. [Example: The number twelve can be written 12, 014, 0XC, or 0b1100. The literals 1048576, 1’048’576, 0X100000, 0x10’0000, and 0’004’000’000 all have the same value. — end example ]

Wikipedia describes what endianness is, and uses our number system as an example to understand big-endian.

The terms endian and endianness refer to the convention used to interpret the bytes making up a data word when those bytes are stored in computer memory.

Big-endian systems store the most significant byte of a word in the smallest address and the least significant byte is stored in the largest address (also see Most significant bit). Little-endian systems, in contrast, store the least significant byte in the smallest address.

An example on endianness is to think of how a decimal number is written and read in place-value notation. Assuming a writing system where numbers are written left to right, the leftmost position is analogous to the smallest address of memory used, and rightmost position the largest. For example, the number one hundred twenty three is written 1 2 3, with the hundreds place left-most. Anyone who reads this number also knows that the leftmost digit has the biggest place value. This is an example of a big-endian convention followed in daily life.

In this context, we are considering a digit of an integer literal to be a "byte of a word", and the word to be the literal itself. Also, the left-most character in a literal is considered to have the smallest address.

With the literal 1234, the digits one, two, three and four are the "bytes of a word", and 1234 is the "word". With the binary literal 0b0111, the digits zero, one, one and one are the "bytes of a word", and the word is 0111.

This consideration allows us to understand endianness in the context of the C++ language, and shows that integer literals are similar to "big-endian".


You're missing the distinction between endianness as written in the source code and endianness as represented in the object code. The answer for each is unsurprising: source-code literals are big-endian because that's how humans read them; in object code, they're written however the target reads them.

Since a byte is by definition the smallest unit of memory access, I don't believe it would even be possible to ascribe an endianness to the internal representation of bits in a byte. The only way to discover endianness for larger numbers (whether intentionally or by surprise) is by accessing them from storage piecewise, and the byte is by definition the smallest accessible storage unit.