Why do people use (1 << PA0) when setting port?

PA0 will be defined as 0 so the following line:

DDRA |= (1 << PA0);

Equates to shifting 1 left by zero bits, leaving an OR with the value 1 that sets the first bit. The following line, on the other hand:

DDRA |= PA0;

Is ORing with zero, which won't change the register at all.

Why do they do this? Mostly because everyone they learned from, or asked for help, did it that way. And because the standard AVR defines are done in an odd way.

Shifting a value left by n moves it over by n binary positions. 1 << PA0 shifts 1 to the left by PA0 bits. Since PA0 is 0, there is no shift. But given 1 << 6, 1 will become 0b1000000. Given 13 << 6, it will shift 13, which is 0b1101 in binary, over by 6 to become 0b1101000000, or 832.

Now, we need to see what PA0 - PA7 are defined as. They are typically defined in the device-specific header for your particular microcontroller, included via io.h or portpins.h:

#define     PA7   7
#define     PA6   6
#define     PA1   1
#define     PA0   0

They are defined as their numerical position, in decimal!

They cannot be ORed into a register directly, because they are bit numbers, not single-bit masks.

If you were to do PORTA |= PA7; assuming PORTA is 0b00000000 (all off), you will get:

PORTA = PORTA | PA7; or PORTA = 0 | 7; or PORTA = 0 | 0b111

See the problem? You just turned on PA0, PA1, PA2, instead of PA7.

But PORTA |= (1 << PA7); works as you expect.

PORTA = PORTA | (1 << PA7); or PORTA = 0 | (1 << 7); or PORTA = 0 | 0b10000000;

The Smarter Way

The MSP430, the other (better) microcontroller, has a standard set of bit defines:

#define BIT0                (0x0001)
#define BIT1                (0x0002)
#define BIT6                (0x0040)
#define BIT7                (0x0080)

These are defined as their binary position, in hex. BIT0 is 0b0001, unlike PA0, which is 0. BIT7 is 0b10000000, unlike PA7, which is 0b111.

So direct assignments like P1OUT |= BIT7; will work the same as P1OUT |= (1 << 7); would.

Your question has already been answered, but I want to present an alternative that was a bit much for a comment. One of the first things I do when I start an embedded project is define my bit set and clear macros.

#define bitset(var,bitno) ((var) |= 1 << (bitno))
#define bitclr(var,bitno) ((var) &= ~(1 << (bitno)))

Using the macros, your code becomes:

bitset(DDRA, PA0);

The end result is a bit set instruction in assembly.