Why we need start/stop bits for asynchronous transmission

If there were no zero start bit to kick off the receiver's timing, how would the receiver know what to do when a serial byte arrives with a leading 1 in the data stream? What happens if the next bit is also 1, and the bit after that - what if all the bits are 1? You'd miss the entire byte, because with no start bit of 0 the line would never change state.

Regular 8 high bits with a leading zero start bit:

(waveform image)

Missing start bit:

(waveform image)
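
To make the problem concrete, here is a minimal Python sketch (not tied to any particular UART library) of the two cases above: the same 0xFF byte with and without a low start bit on an idle-high line. The framing convention used (LSB first, one start bit, one stop bit) is the common UART default and is assumed here.

```python
# A minimal sketch of asynchronous framing: idle-high line, one low start bit,
# eight data bits (LSB first), one high stop bit.

def frame_byte(byte: int) -> list[int]:
    """Return the line levels for one frame: start bit, data bits, stop bit."""
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data_bits + [1]                      # start, data, stop

def no_start_bit(byte: int) -> list[int]:
    """The same byte sent with no start bit at all."""
    return [(byte >> i) & 1 for i in range(8)]

idle = [1] * 4   # the line rests at logic high between frames

print(idle + frame_byte(0xFF) + idle)    # the low start bit marks where the byte begins
print(idle + no_start_bit(0xFF) + idle)  # all ones: a flat high line, indistinguishable from idle
```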


Another consideration, in addition to the accepted answer, is timing. There will be drift between the sender's and receiver's clocks, so the receiver has to "recover" the clock in one way or another. There are fancier schemes, such as 8b/10b encoding, that guarantee enough transitions (and special idle symbols) for the receiver to recover the clock, but the simple approach in a UART is a "start bit" whose leading edge (the transition out of the idle state) gives the receiver a known time reference. This allows the receiver to resynchronize its clock before sampling the upcoming byte.

These start and stop patterns have to occur often enough that the alignment cannot drift too far while a frame is being received, which is why a start and a stop bit appear with every byte. It's a simple rule, even if it falls short of being "ideal" in other ways.
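
As a rough illustration of how that per-byte resynchronization absorbs clock error, here is a small Python simulation. The oversampling factor and the 3% clock mismatch are arbitrary assumptions chosen for the example; the receiver simply looks for the falling edge of the start bit and samples each data bit at its centre using its own (slightly wrong) clock.

```python
# A rough sketch of why per-byte resynchronization tolerates clock mismatch.
# The numbers (1000 samples per bit, a receiver clock 3% fast) are illustrative
# assumptions, not taken from any particular UART.

OVERSAMPLE = 1000                      # samples per transmitted bit time

def transmit(byte: int) -> list[int]:
    """Idle-high line, low start bit, 8 data bits (LSB first), then stop bit and idle."""
    bits = [1] * 5 + [0] + [(byte >> i) & 1 for i in range(8)] + [1] * 5
    return [b for b in bits for _ in range(OVERSAMPLE)]

def receive(line: list[int], clock_error: float = 0.03) -> int:
    """Find the falling edge of the start bit, then sample each data bit at its
    centre using a receiver clock that runs `clock_error` too fast."""
    edge = next(i for i in range(1, len(line)) if line[i - 1] == 1 and line[i] == 0)
    bit_time = OVERSAMPLE * (1 - clock_error)          # receiver's idea of one bit time
    byte = 0
    for n in range(8):
        sample_at = edge + int((n + 1.5) * bit_time)   # centre of data bit n
        byte |= line[sample_at] << n
    return byte

print(hex(receive(transmit(0xA5))))   # 0xa5 - decoded correctly despite the mismatch
# By the last data bit the sampling point is off by only about 8.5 * 0.03 = 0.26
# bit times, well under the half bit that would cause a mis-sample. Without a
# start bit to resynchronize on, the same error would accumulate to a whole bit
# roughly every 33 bits.
```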


I don't quite get the idea: why do we need a start/stop bit? A byte consists of 8 bits, so couldn't the receiver just count how many bits it has received so far and, once the count reaches 8, treat that as one byte and repeat the process? So why do we need the start/stop bits?

The receiver can't count the bits it's receiving, because the receiver doesn't know whether or not it's receiving bits!

Let's imagine that the sender and the receiver are communicating using sound, and let's imagine that a 0 is represented by one second of silence and a 1 is represented by one second of sound. In your book, the "idle state"—what the sender sends when it doesn't have any actual data to send—is 1, meaning sound.

Now suppose that you're the receiver, and the sender is not using a start bit. You hear eight seconds of continuous sound. Did you just hear the byte "11111111", or is the sender just idling? You have no way of knowing, because it all sounds the same to you.

Alternatively, suppose that you hear one second of silence, then six seconds of sound, then one second of silence. Did you just hear the byte "01111110"? Or was it, perhaps, the byte "11110111" followed by the byte "11101111"? Again, you have no way of knowing.

This is where the start bit comes in. Whenever the sender wants to send a byte, first it sends a 0 (one second of silence), then it sends the byte of data.

Now your job as a receiver is a lot easier! If you hear nine seconds of sound, you know that the sender is merely idling. If, on the other hand, you hear one second of silence followed by eight seconds of sound, you know that the sender just sent the byte "11111111".

Of course, most machine communication systems don't use sound; they use electricity instead. But electrical signals work just like the sound in this analogy: the receiver is always going to receive something, whether we want it to or not. So we need to give the receiver some way of knowing whether it's receiving real data or just an idle line.
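
Here is a short Python sketch of the receiver's side of that analogy, with one list element per second (1 = sound, 0 = silence). The framing rule is the one described above - a single 0 start bit followed by eight data bits, with 1 as the idle value - and the function name is just for illustration.

```python
# Walk the signal, treating each 0 as a start bit and reading the next 8 values
# as a byte. Continuous 1s are just the idle line.

def listen(signal: list[int]) -> list[str]:
    bytes_heard, i = [], 0
    while i < len(signal):
        if signal[i] == 1:            # idle: nothing to do, keep listening
            i += 1
        else:                         # start bit: the next 8 values are data
            data = signal[i + 1:i + 9]
            bytes_heard.append("".join(map(str, data)))
            i += 9
    return bytes_heard

print(listen([1] * 9))                         # [] - nine seconds of sound is just idling
print(listen([1, 0, 1, 1, 1, 1, 1, 1, 1, 1]))  # ['11111111'] - a start bit, then the byte
```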

To address this specific question from your comment:

Just one question: if we didn't have an "idle" value, then when there is no data to send the receiver wouldn't receive anything, so couldn't it count every 8 bits as a byte without needing start/stop bits?

It's physically impossible not to have an idle value. If you've got an electric cable, then it's possible to send a positive voltage, or a negative voltage, or a voltage of 0, but it's physically impossible to not send any voltage. That means that the receiver is always going to receive some voltage, no matter what we do. So we have to give the receiver some way to know whether the voltage that it's receiving is meaningful or not.
