12-Bit versus 16-Bit A/Ds: Which one do I use?
Measurement errors depend in part on the resolution of the A/D being used. Under the conditions set forth in the manufacturer's specifications, these errors should fall within specified limits. An accuracy specification typically has two components: one proportional to the measured value (gain error) and a fixed offset value. The offset component can be decisive when choosing between a 12-bit and a 16-bit A/D.

For example, suppose a 12-bit analog input board has an accuracy of +/-(0.01% of reading + 1 LSB) on a 2 volt range. For a 0.5 volt input, the uncertainty is +/-(0.000050 + 2.0/4096) = +/-(0.000050 + 0.000488), or +/-538 microvolts. Note that a 1 LSB offset error equals range/4096 = 0.024% of range; on a 2 volt range, that is 0.49 millivolts, which contributes the bulk of the error when measuring 0.5 volts.

Now suppose a 16-bit analog input board has the same accuracy of +/-(0.01% of reading + 1 LSB) on the 2 volt range. For a 0.5 volt input, the uncertainty is +/-(0.000050 + 2.0/65536) = +/-(0.000050 + 0.0000305), or +/-80.5 microvolts. Here, a 1 LSB offset error equals range/65536 = 0.0015% of range.

For the same input voltage, the 16-bit board is considerably more accurate than the 12-bit board because its offset (LSB) term is much smaller. This example also illustrates the link between resolution and accuracy. Thus, when choosing an A/D, take into consideration your input signal levels, the range you will use, and the accuracy you require.
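The error budget above can be sketched in a few lines of Python. This is a minimal illustration, assuming the +/-(0.01% of reading + 1 LSB) specification from the example; the function name and parameters are ours, not from any vendor library.

```python
def uncertainty_volts(reading, fs_range, bits, gain_error_pct=0.01):
    """Worst-case uncertainty in volts: (gain error % of reading) + 1 LSB.

    reading        -- measured voltage
    fs_range       -- full-scale range of the A/D in volts
    bits           -- A/D resolution in bits
    gain_error_pct -- gain error as a percentage of reading (assumed 0.01%)
    """
    gain_term = reading * gain_error_pct / 100.0   # portion of the reading
    lsb = fs_range / (2 ** bits)                   # 1 LSB offset term
    return gain_term + lsb

# 0.5 V input on a 2 V range, as in the example:
u12 = uncertainty_volts(0.5, 2.0, 12)   # ~538 microvolts
u16 = uncertainty_volts(0.5, 2.0, 16)   # ~80.5 microvolts
print(f"12-bit: {u12 * 1e6:.1f} uV, 16-bit: {u16 * 1e6:.1f} uV")
```

Running this reproduces the figures worked out above and makes it easy to try other ranges and input levels before selecting a board.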