How does UART know the difference between data bits and start/stop bits?

It detects the start bit; that is exactly its purpose. The idle line looks like this:

...1111111111111111111111111111111...

Once the receiver sees a 0 after a long run of ones (or after a stop bit, as we will see shortly), it knows a transmission has started and begins counting bits. It knows that the 8 bits after the start bit (or however many the configuration defines) are data. The ninth bit is the stop bit and should be 1. If it is not, a framing error occurs and resynchronization is required.

After the stop bit is received, the receiver starts waiting for the next start bit, and so on.
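The framing logic described above can be sketched in a few lines of Python. This is a hypothetical illustration (one sample per bit, 8 data bits sent LSB first, one stop bit), not the implementation of any particular UART:

```python
def decode_8n1(bits):
    """Decode a list of 0/1 line samples into (byte, framing_ok) tuples."""
    frames = []
    i = 0
    while i < len(bits):
        if bits[i] == 1:           # idle (or stop) level: keep scanning
            i += 1
            continue
        # bits[i] == 0: start bit found
        if i + 10 > len(bits):     # not enough samples left for a full frame
            break
        data_bits = bits[i + 1:i + 9]            # 8 data bits, LSB first
        byte = sum(b << n for n, b in enumerate(data_bits))
        framing_ok = (bits[i + 9] == 1)          # stop bit must be 1
        frames.append((byte, framing_ok))
        i += 10                                  # move past the whole frame
    return frames

# Idle ones, then 'A' (0x41 = 0b01000001, so LSB first: 1,0,0,0,0,0,1,0):
line = [1, 1, 1] + [0, 1, 0, 0, 0, 0, 0, 1, 0, 1] + [1, 1]
print(decode_8n1(line))  # [(65, True)]
```

If the tenth bit after the start bit is 0 instead of 1, `framing_ok` comes back `False`, which is the framing error mentioned above.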

Theoretically, there can be a synchronization problem if the line looks like:

..1010101010101010101.... 

or similar. In this case the receiver cannot tell where a frame starts, but with such a pattern it does not really matter, since any start position yields the same result. To avoid such problems, some protocols define a stop bit of 1.5 bit lengths to make it unique. And in practice there are almost always time delays between two packets of data, so the line is idle long enough for the receiver to synchronize.
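The claim that the start position makes no difference on an alternating line can be checked with a tiny simulation (again a hypothetical one-sample-per-bit decoder, 8 data bits LSB first):

```python
def frames_from(bits):
    """Decode 8N1 frames, treating the first 0 found as a start bit."""
    out, i = [], 0
    while i + 10 <= len(bits):
        if bits[i] == 0:                         # start bit
            data = bits[i + 1:i + 9]
            out.append(sum(b << n for n, b in enumerate(data)))
            i += 10
        else:
            i += 1
    return out

line = [1, 0] * 20                  # ...101010101010...
print(frames_from(line))            # [85, 85, 85]
print(frames_from(line[1:]))        # [85, 85, 85] -- same bytes, shifted start
```

Wherever the receiver locks on, it decodes the same byte stream (0x55, whose bit pattern is itself alternating), which is why this pathological case is harmless.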


This sounds like a question from someone trying to emulate a UART receiver in software or an FPGA. RS-232 uses the terms mark and space, but these usually correspond, once digitized, to a '1' and a '0', respectively.

A UART receiver often divides each bit time (which must be known a priori) into at least 4, but often 16 or more, sub-periods. It starts out (upon power-up/reset) in a state where it expects the serial receive line to be at mark. If the line is NOT at mark at that moment, then it knows it is in the middle of a transmission frame and must wait until it can synchronize. If the line is at mark, then it may or may not be in the middle of something and will have to wait and see. This is a problem with RS-232 if you plug into another device while serial communication is already happening, or if you are part of a tap monitoring the asynchronous serial communication between two other players and have just been reset.

To be absolutely sure, when coming out of reset anyway, the UART would need to observe at least N bit times of mark followed by one bit time of space to re-synchronize (where N is the number of bits per word, often 7 or 8, assuming no parity option here), or else N+1 bit times of space. Many don't carry that much state around, so they can synchronize incorrectly if started in the middle of a stream. You will often get framing errors and occasional data bytes until it happens to accidentally re-synchronize correctly again. That's often been an acceptable price, too. Normally, cables are connected and devices are powered up in a particular order of operation, so there are rarely any issues.

Once synchronized, though, the UART knows what to expect. A frame always starts with the receive line going from mark to space: the required start bit, which lasts a full bit time. That is followed by the data bits and then by at least one bit time of mark (or longer) as the stop bit. If it stays synchronized, it will see that pattern repeated over and over again.

Part of the reason for dicing up the bit times, to something like 4X or 16X, is that the clocks used by the transmitter and the receiver aren't necessarily perfectly accurate, and they are otherwise simply asynchronous to each other. So part of the synchronization that goes on in the receiver is lining up its diced-up periods with the transmitter's timing. The finer-grained this is done, the better aligned the receiver can become.
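The oversampling idea can be sketched as follows. This is a simplified model (16 samples per bit, 8N1 framing, names are my own), not a description of any specific chip: the receiver detects the mark-to-space edge, confirms the start bit at its midpoint, and then samples every following bit at its own midpoint, so modest clock mismatch between transmitter and receiver is tolerated:

```python
OVERSAMPLE = 16  # samples per bit time

def receive_frame(samples):
    """Return (byte, framing_ok) from a 16x-oversampled 8N1 frame,
    or None if no valid start bit is found. `samples` is a list of 0/1."""
    # Find the falling edge (mark -> space) that begins the start bit.
    start = None
    for i in range(1, len(samples)):
        if samples[i - 1] == 1 and samples[i] == 0:
            start = i
            break
    if start is None:
        return None
    # Re-check the start bit at its midpoint to reject a short glitch.
    mid = start + OVERSAMPLE // 2
    if mid >= len(samples) or samples[mid] != 0:
        return None
    # Sample the 8 data bits and the stop bit at their midpoints.
    bits = []
    for n in range(1, 10):
        idx = mid + n * OVERSAMPLE
        if idx >= len(samples):
            return None
        bits.append(samples[idx])
    byte = sum(b << n for n, b in enumerate(bits[:8]))   # LSB first
    return (byte, bits[8] == 1)                          # stop bit must be mark

def stretch(bits):
    """Expand one-sample-per-bit data to OVERSAMPLE samples per bit."""
    return [b for b in bits for _ in range(OVERSAMPLE)]

# Idle line, then 0x55 (LSB first: 1,0,1,0,1,0,1,0) framed with start/stop:
line = [1] * 8 + stretch([0] + [1, 0, 1, 0, 1, 0, 1, 0] + [1])
print(receive_frame(line))  # (85, True)
```

Sampling at the midpoint means the transmitter's clock can drift by a sizable fraction of a bit over the 10-bit frame before a bit is read at the wrong time, which is why finer-grained oversampling gives better alignment.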

Tags:

uart

serial