Cascading ADCs to get higher resolution

Splitting your input range will get you 13 bits, not 24. Suppose you have an input range from -4.096V to +4.096V. Then a 12-bit ADC will have a 2mV resolution: \$2^{12}\$ x 2mV = 8.192V (the range from -4.096V to +4.096V). If you take the positive half you get 1mV resolution there, because your range is halved: \$2^{12}\$ x 1mV = 4.096V. That's \$2^{12}\$ levels above 0V, and another \$2^{12}\$ below. Together \$2^{12}\$ + \$2^{12}\$ = \$2^{13}\$, so that's 1 bit extra, not 12.
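If it helps, here's a tiny back-of-the-envelope check in Python (the voltages are just the ones from the example above):

```python
# Splitting a +/-4.096 V range into two 12-bit halves only doubles the
# number of levels, i.e. one extra bit - not 24 bits.
levels_positive = 2**12          # codes covering 0 V .. +4.096 V, 1 mV each
levels_negative = 2**12          # codes covering -4.096 V .. 0 V, 1 mV each
total_levels = levels_positive + levels_negative

print(total_levels)              # 8192
print(total_levels == 2**13)     # True: 13 bits, not 24
```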

About changing the reference voltage. I'll give a different example. Suppose you have a 1-bit ADC and want to get 12 bits by changing the reference. One bit will give you a 1 if the input is greater than \$\frac{V_{REF}}{2}\$, and a zero otherwise. Suppose your reference is 1V, then the threshold is 0.5V. If you change your reference to 0.9V you'll have a new threshold at 0.45V, so you're already able to discern 3 different levels. Hey, this may work: I can do 12 bits with a 1-bit ADC, and then probably also 24 bits with a 12-bit ADC!
Hold it! Not so fast! You can do this, but the components of your 1-bit ADC have to be 12-bit grade: that goes for the precision of the reference, and for the comparator. Likewise, a 12-bit ADC would only be able to do 24 bits if the ADC itself is accurate to 24-bit level, and the varying reference voltage is 24-bit grade as well. So in practice you don't gain much.
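For concreteness, a minimal Python sketch of the varying-reference idea (the function names and the 4096-step sweep are illustrative assumptions; the catch is exactly the one above, that every threshold and the comparator must themselves be 12-bit accurate):

```python
def one_bit_adc(v_in, threshold):
    """The whole 1-bit ADC: a single comparator decision."""
    return 1 if v_in > threshold else 0

def convert_by_sweeping_reference(v_in, v_ref=1.0, steps=4096):
    # Sweep 4096 thresholds across the range; counting how many the input
    # exceeds gives a ~12-bit code, but ONLY if every threshold (and the
    # comparator's offset) is accurate to the 12-bit level.
    count = 0
    for k in range(steps):
        threshold = v_ref * (k + 0.5) / steps
        count += one_bit_adc(v_in, threshold)
    return count                     # code in 0..4096

print(convert_by_sweeping_reference(0.2))   # ~819 for 0.2 V in a 1 V range
```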

There's no such thing as a free lunch.

edit
There seems to be a misunderstanding about oversampling, and the fact that there are 1-bit audio ADCs which can give you 16-bit resolution.
If your input is a fixed DC level, say 0.2V in a 1V input range, your output will always be the same as well. With a 1-bit ADC this will be zero in our example (the level is less than half the reference). And that will be so whether you sample at 1 sample per second or 1000, so averaging doesn't change anything. Why does it work with the audio ADC, then? Because the voltage varies all the time (noise), which, according to Einstein (relativity, you know ;-)), is the same as keeping the voltage constant and varying the reference. Then you get several different readings while oversampling, which you can average to get a quite good approximation of your actual level.
The noise has to be strong enough to cross the ADC's threshold(s), and has to fit certain constraints, like a Gaussian distribution (white noise). In the 1-bit example above it didn't work because the noise level was too low.
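A quick simulation of the 0.2V example shows the difference the noise makes (numpy, the noise level and the seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
v_in, v_threshold, n = 0.2, 0.5, 10_000    # DC level, 1-bit threshold, sample count

# Without noise every sample compares the same way: the average stays 0.
clean = np.mean(np.full(n, v_in) > v_threshold)

# With enough Gaussian noise the input crosses the threshold now and then,
# and the fraction of 1s varies smoothly with the input level, so averaging
# many samples recovers information about it (via the noise statistics).
noisy = np.mean((v_in + rng.normal(0.0, 0.35, n)) > v_threshold)

print(clean)   # 0.0 - averaging a constant 0 cannot help
print(noisy)   # roughly 0.2 for this noise level; the key point is it tracks v_in
```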


Further reading:
Atmel application note AVR121: Enhancing ADC resolution by oversampling


Lots of things in your question, so let's take them one by one.

Suppose I have a pair of 12-bit ADCs, I can imagine that they can be cascaded to get <= 24-bit output. I can think of simply using one for the positive range and the other for the negative range, though there probably will be some distortion in the cross-over region. (Suppose we can ignore a few error bits or, perhaps, place a 3rd ADC to measure the value around 0 volts.)

Not really - you would get 13-bit resolution. One can describe the operation of a 12-bit converter as deciding in which of the 4096 bins (2^12) the input voltage lies. Two 12-bit ADCs would give you 8192 bins, or 13-bit resolution.

Another option I had been thinking of is using a single hi-speed ADC and switching the reference voltages to get a higher resolution at lower speed.

Actually this is how a successive-approximation (SAR) converter works. Basically a one-bit converter (a.k.a. comparator) is used together with a digital-to-analog converter that produces a varying reference voltage according to the successive-approximation algorithm, to obtain a digitized sample of the voltage. Note that SAR converters are very popular, and most of the ADCs in microcontrollers are of the SAR type.
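A rough sketch of that loop in Python - an ideal DAC and comparator are assumed, and the names are illustrative:

```python
def sar_convert(v_in, v_ref=1.0, bits=12):
    """Successive approximation: try each bit from MSB to LSB and keep it
    if the DAC output for the trial code is still below the input."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)                # tentatively set this bit
        v_dac = v_ref * trial / (1 << bits)      # DAC generates the trial reference
        if v_in >= v_dac:                        # the comparator's 1-bit decision
            code = trial                         # keep the bit
    return code

print(sar_convert(0.2))   # 819, i.e. 0.2 V of a 1 V range at 12 bits
```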

Also there should be a way of getting a real-valued result by using one fixed-ref ADC and then switching the aref of the secondary converter to get a more precise value in between.

Actually that is awfully similar to how pipeline ADCs work. However, instead of changing the reference of the secondary ADC, the residue error left after the first stage is amplified and processed by the next-stage ADC.
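Something like the following toy model - the 6+6 bit split, the ideal stages, and the lack of digital error correction are all simplifying assumptions:

```python
def ideal_adc(v_in, v_ref, bits):
    code = int(v_in / v_ref * (1 << bits))       # ideal quantizer, no noise
    return max(0, min((1 << bits) - 1, code))

def pipeline_convert(v_in, v_ref=1.0, bits_per_stage=6):
    coarse = ideal_adc(v_in, v_ref, bits_per_stage)
    residue = v_in - coarse * v_ref / (1 << bits_per_stage)   # what stage 1 missed
    amplified = residue * (1 << bits_per_stage)               # gain residue back to full scale
    fine = ideal_adc(amplified, v_ref, bits_per_stage)        # stage 2 digitizes the residue
    return (coarse << bits_per_stage) | fine                  # combined 12-bit code

print(pipeline_convert(0.2))   # 819, same code a single ideal 12-bit ADC would give
```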

Any comments and suggestions are welcome. I am presuming that a quad 8-bit (or dual 12-bit) chip is less expensive than a single 24-bit chip.

Actually there is a reason for that, as making a 24-bit converter is not as simple as arranging four 8-bit converters in some configuration. There is much more to it. I think the key misunderstanding here is thinking that one can just "add" numbers of bits. To see why this is wrong, it is better to think of an ADC as a circuit that decides to which "bin" the input voltage belongs. The number of bins is equal to 2^(number of bits). So an 8-bit converter will have 256 bins (2^8). A 24-bit converter will have over 16 million bins (2^24). So in order to have the same number of bins as a 24-bit converter, one would need over 65 thousand 8-bit converters (actually 2^16).

To continue with the bin analogy - suppose that your ADC has a full scale of 1V. Then the 8-bit converter "bin" is 1V/256 = ~3.9mV. In the case of the 24-bit converter it would be 1V/(2^24) = ~59.6nV. Intuitively it is clear that "deciding" whether the voltage belongs to a smaller bin is harder. Indeed this is the case, due to noise and various circuit non-idealities. So not only would one need over 65 thousand 8-bit converters to get 24-bit resolution, but those 8-bit converters would also have to be able to resolve a 24-bit sized bin (your regular 8-bit converter would not be good enough, as it is able to resolve a ~3.9mV bin, not a 59.6nV bin).
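To put numbers on both paragraphs at once (a trivial check, nothing more):

```python
# How many 8-bit converters would it take to match a 24-bit converter's bins,
# and how wide are those bins for a 1 V full scale?
bins_8bit, bins_24bit = 2**8, 2**24

print(bins_24bit // bins_8bit)    # 65536 = 2^16 converters just to match the bin count
print(1.0 / bins_8bit)            # ~0.0039   V  (~3.9 mV per 8-bit bin)
print(1.0 / bins_24bit)           # ~5.96e-08 V  (~59.6 nV per 24-bit bin)
```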


Yes, in theory you can do what you want, but only if you have some wholly unrealistic equipment available to you.

The several other comments made so far about limited extra accuracy are correct, alas.

Consider: measure a voltage with a 12-bit ADC and get, say, 111111000010. You know that the real value lies somewhere in a 1-bit range, +/- 0.5 bits either side of this value.

IF your ADC was accurate to 24 bits but was providing only 12 bits, then it is reporting that the value lies within +/- half a bit of 111111000010 000000000000. If this was the case, you could take a second 12-bit ADC with a +/- 1/2 bit range, centre it on 111111000010 000000000000 and read the result. This would give you the difference between the actual signal and the first ADC's value, as desired. QED.

However, the 12-bit ADC is itself only accurate to about half a bit. The sum total of its various errors causes it to declare a certain result when the real result is up to about half a bit different, plus or minus.

While you would like

111111000010 to mean 111111000010 000000000000

it may actually mean 111111000010 000101101010 or whatever.

So if you then take a 2nd ADC and measure the lower 12 bits and ASSUME that they are relative to an exact 12-bit boundary, they are actually relative to the above erroneous value. As this value is essentially a random error, you would be adding your new lower-12-bits figure to 12 bits of essentially random noise. Precise + random = new random.

EXAMPLE

Use two converters that can measure a range and give a result in 1 of 10 steps. If scaled to 100 volts full scale they give 0 10 20 30 40 50 60 70 80 90.

If scaled to 10 volts full scale they give 0 1 2 3 4 5 6 7 8 9.

You decide to use these two converters to measure a 100 volt range with 1 volt accuracy.

Converter 1 returns 70V. You then measure the voltage relative to 70V and get -3V. So you conclude that the real value is 70V - 3V = 67V.

HOWEVER the 70V result could in fact be any of 65 66 67 68 69 70 71 72 73 74

Only if the 1st converter is ACCURATE to 1V in 100, even though it displays 10V steps in 100V, can you achieve what you want.

So your real result is 67V +/- 5 volts = anything from 62V to 72V. So you are no better off than before. Your centre has moved, but it may be located randomly.
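Putting numbers on it - the hidden error value below is an illustrative assumption, just to show how it passes straight through to the result:

```python
v_true       = 71.0    # the voltage actually present
coarse_code  = 70.0    # what the coarse (10 V step) converter reports
coarse_error = +4.0    # hidden error: its "70 V" level really sits at 74 V

# The fine converter measures precisely, but relative to the erroneous level.
fine_reading = v_true - (coarse_code + coarse_error)    # -3 V
concluded    = coarse_code + fine_reading               # 67 V, what you report

print(concluded, v_true)   # 67.0 vs 71.0: the coarse converter's error survives untouched
```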

You will be able to get a modest improvement this way, as a converter is usually slightly more accurate than the bits it returns (you hope), so your 2nd converter makes some use of this.


A system that does in fact work has been mentioned, with one important omission. If you sample a signal N times and you add +/- half a bit of Gaussian noise, you will spread the signal "all over the possible range", and the average value will now be more accurate than before - roughly log2(N)/2 extra bits of resolution. This scheme has fishhooks and qualifications, and you cannot just get an arbitrary extra number of bits, but it does offer some improvement.
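A quick check of that rule of thumb:

```python
import math

# Extra bits of resolution from averaging N dithered samples: ~0.5 * log2(N),
# i.e. every 4x oversampling buys about one extra bit.
for n in (4, 16, 256, 4096):
    print(n, 0.5 * math.log2(n))    # 4->1, 16->2, 256->4, 4096->6 extra bits
```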


In the first case above I mentioned a 12-bit ADC with 24-bit accuracy. You can achieve something of the sort by using a 12-bit ADC and reading its assumed value with a 24-bit (e.g. delta-sigma) converter. IF the signal was stable enough that it remained in the same one-bit range, you could use a 2nd ADC to read the 2nd 12 bits with respect to this stable signal.

Alternative - just read the 24-bit signal initially with the sigma-delta converter, lock in that point, and then measure successively relative to it with the 2nd ADC. As long as the signal stays within range of the 2nd ADC you'll get a much faster result.
