Most real-world signals are analog in nature. To use them in digital devices, they must first be converted into digital data, a process known as analog-to-digital (A/D) conversion, performed by an analog-to-digital converter (ADC).

ADCs are circuits that sample the amplitude of an analog signal at regular intervals and map each sample to a digital word made up of binary 0s and 1s. This is how information gets from the physical world into your mobile phone.

### Sampling

During the A/D conversion process, an analog signal is progressively sampled and turned into a series of digital values. Sampling takes place at discrete points in time; the spacing between consecutive points is called the sampling interval. The resulting digital data is discrete in both time and amplitude. The input signal must also stay within the converter's voltage range, since amplitudes outside that range are clipped, so choosing the sampling rate and input range correctly matters in practical applications.
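To make the idea of a sampling interval concrete, here is a minimal sketch in plain Python. The signal frequency (1 kHz) and the 100 µs sampling interval are assumptions chosen purely for illustration:

```python
import math

f_signal = 1000.0   # assumed signal frequency in Hz
t_sample = 100e-6   # assumed sampling interval Ts = 100 µs (fs = 10 kHz)

# Sample x(t) = sin(2*pi*f*t) at the discrete instants t = n*Ts.
samples = [math.sin(2 * math.pi * f_signal * n * t_sample) for n in range(10)]

# The result is discrete in time; quantization (covered below) would
# additionally make it discrete in amplitude.
print(samples)
```

Each list element corresponds to one sampling instant; nothing between those instants is recorded.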

For example, the temperature of your room or the light entering a camera are analog quantities that undergo A/D conversion to become digital data a computer can process. A/D conversion is also used to turn the electrical activity of the brain into digital information that a computer can read.

Analog-to-digital conversion is a fundamental process with countless practical applications. A typical telephone modem, for example, uses A/D conversion to turn the incoming analog audio signal on a twisted pair of wires into digital information that a microprocessor can analyze and process.

The fundamentals behind A/D conversion are grounded in mathematical principles, particularly the Nyquist-Shannon sampling theorem. The theorem states that, to faithfully reproduce a bandlimited analog signal, the sampling rate must be at least twice the highest frequency component of the signal.
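What goes wrong when the theorem is violated can be shown in a few lines: a tone above half the sampling rate produces sample values indistinguishable from those of a lower-frequency "alias". The specific frequencies (1 kHz sampling, 900 Hz tone) are assumed for the demonstration:

```python
import math

fs = 1000.0            # assumed sampling rate in Hz; Nyquist limit is fs/2 = 500 Hz
f_high = 900.0         # a tone above the Nyquist limit
f_alias = fs - f_high  # 900 Hz folds down to 100 Hz

high  = [math.sin(2 * math.pi * f_high  * k / fs) for k in range(20)]
alias = [math.sin(2 * math.pi * f_alias * k / fs) for k in range(20)]

# The two sequences of samples are identical up to sign, so from the
# samples alone the 900 Hz tone cannot be told apart from a 100 Hz one.
same = all(abs(h + a) < 1e-9 for h, a in zip(high, alias))
print(same)  # True
```

This is why anti-aliasing filters are placed in front of ADCs: frequencies above fs/2 must be removed before sampling, not after.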

There are a number of different ways to achieve A/D conversion; the one most commonly used is Pulse Code Modulation (PCM for short). In PCM, samples are taken at regular intervals of time and each sample is converted into a binary value.
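A minimal PCM encoder can be sketched as follows. The 0–1 V input range and 8-bit word length are assumptions, not properties of PCM itself:

```python
def pcm_encode(sample_volts, full_scale=1.0, bits=8):
    """Map one analog sample (0..full_scale volts) to an n-bit binary word."""
    levels = 2 ** bits                    # 256 quantization levels for 8 bits
    code = int(sample_volts / full_scale * levels)
    code = max(0, min(levels - 1, code))  # clamp to the representable range
    return format(code, f"0{bits}b")      # e.g. '10000000'

# A 0.5 V sample lands exactly at mid-scale of the 0..1 V range.
print(pcm_encode(0.5))  # '10000000'
```

A real PCM stream is just such words emitted once per sampling interval.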

There are two main ways to time the conversions: the converter can be externally clocked, or it can be controlled by the processor itself. When the converter is externally clocked, it interrupts the processor at each new sample so the value can be read and acted on before the next digital value arrives. When the converter is controlled by the processor, the processor itself decides when to start each conversion.

### Quantization

The mapping of continuous analog signal values onto a finite set of digitally represented integers is known as quantization. Together with sampling, it forms the core of the analog-to-digital (A-to-D) conversion process: sampling makes the signal discrete in time, and quantization makes it discrete in amplitude.

In sampled and digitized form, a physical quantity can be represented by however many bits the system allocates, and the data can be stored, copied, and transmitted without degradation, which is a major advantage over analog representations. The tradeoff is that the accuracy of the digital representation is limited by the size of the smallest step that can be represented. This limit is set by the resolution of the system.

An analog-to-digital converter samples the signal at a fixed rate and then performs quantization, reducing the continuous range of possible values to a smaller, countable set. Each of these quantization levels is assigned a unique binary number. The smallest voltage step the converter can represent, the difference between two adjacent levels, is defined as the least significant bit (LSB).

For example, an 8-bit ADC can represent only 2^8 = 256 distinct digital values. If the analog input varies between zero and one volt, one LSB corresponds to only about 3.9 mV of that variation.
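The arithmetic behind the 3.9 mV figure, and the quantization error it implies, works out as in this sketch (the 0–1 V range and the example input voltage are assumptions from the text above):

```python
bits = 8
full_scale = 1.0                 # assumed 0..1 V input range
lsb = full_scale / (2 ** bits)   # one code step = 3.90625 mV

v_in = 0.3333                    # an arbitrary input voltage for illustration
code = int(v_in / lsb)           # quantized code (truncating converter)
v_out = code * lsb               # the voltage that code maps back to

error = v_in - v_out             # quantization error, less than 1 LSB here
print(code, error)
```

Whatever the input, a truncating converter's error stays below one LSB, which is why finer resolution (more bits) means a more faithful representation.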

The quantized signal approximates the original sampled waveform by mapping each code back to a single reconstruction value. This simple approximation works well for most applications. Where lower bit rates matter, quantization can be combined with entropy coding and related compression techniques to manage the resulting distortion more efficiently.

Another way to look at the quantization process is as a rounding operation that occurs during digital conversion. This "rounding" results in a loss of accuracy commonly referred to as quantization noise. Generally, the more bits an ADC has available, the lower the quantization noise will be. The cost is that higher-resolution converters are typically slower and more expensive, so resolution must be traded against sampling speed.
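The "more bits, less noise" relationship is often quantified with the standard rule of thumb for an ideal converter driven by a full-scale sine wave, SQNR ≈ 6.02·N + 1.76 dB. A short sketch:

```python
def ideal_sqnr_db(bits):
    """Ideal signal-to-quantization-noise ratio (dB) for a full-scale sine."""
    return 6.02 * bits + 1.76

# Each extra bit of resolution buys roughly 6 dB of SQNR.
for n in (8, 12, 16):
    print(n, "bits ->", round(ideal_sqnr_db(n), 2), "dB")
```

Real converters fall short of these figures because of thermal noise and nonlinearity, which is why datasheets quote an "effective number of bits" (ENOB) below the nominal resolution.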

### Processing

The analog-to-digital conversion process (sampling, quantization, and encoding) transforms a continuous-time, continuous-amplitude signal into discrete data points represented as numbers. The Nyquist-Shannon sampling theorem stipulates that the signal cannot be reproduced accurately unless it is sampled at least twice as frequently as its highest frequency component.
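The three steps can be strung together in one short pipeline sketch. All parameters here are assumptions chosen for illustration: a 1 kHz sine sampled at 10 kHz and encoded as 8-bit offset-binary words over a ±1 V range:

```python
import math

fs, f, bits = 10_000.0, 1000.0, 8
full_scale = 2.0                  # assumed -1..+1 V input range
lsb = full_scale / (2 ** bits)

digital_words = []
for n in range(10):
    v = math.sin(2 * math.pi * f * n / fs)     # 1) sample at t = n/fs
    code = int((v + 1.0) / lsb)                # 2) quantize (offset binary)
    code = min(code, 2 ** bits - 1)            # clamp the top code
    digital_words.append(format(code, "08b"))  # 3) encode as a binary word

print(digital_words[0])  # mid-scale word for v = 0 volts
```

The output stream of binary words is exactly what a downstream processor or DSP consumes.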

The resulting digital data is usually encoded in binary, each bit representing either a 0 or a 1. The accuracy of the digitized signal depends on the precision of the ADC and on the quality of the analog source.

Most physical sensor data starts out in analog form, such as the temperature reported by a thermometer or the light entering a camera lens. This sensor data undergoes analog-to-digital conversion within the device before it can be used as digital data by control systems.

A wide range of modern devices relies on conversion from analog to digital, including audio equipment, televisions, and computers. As noted earlier, a telephone modem uses an ADC to convert the incoming analog signal on a twisted-pair line into digital data that a microprocessor can process.

Other common applications of ADCs include digital calipers, capacitive sensing, and time-to-digital converters. A digital caliper converts the change in capacitance between its two sliding scales into a digital reading of how far they have moved. Capacitive sensors more generally can represent a physical quantity, such as the level of water in a tank or the permittivity of a dielectric material, by using an ADC to convert an analog voltage into digital data.


### Output

The digital output of the conversion process is a sequence of 0s and 1s. This digital representation of an analog signal can then be processed by microprocessors and other embedded systems that understand only a binary system. Consequently, the digitization of analog signals is one of the key components of any control system.

The circuits that digitize an analog signal are referred to as analog-to-digital converters, or ADCs. An ADC converts a continuous (analog) voltage into a series of discrete digital values, the result of the sampling and quantization processes described above. The analog input to the ADC is a voltage that can take on a theoretically infinite number of values: sine waves, the waveforms of human speech, or the signal produced by a television camera.

In order to convert a signal accurately, the ADC must sample the voltage at regular points in time, at a rate governed by the Nyquist theorem. It must also ensure that each sampled value corresponds to a discrete value in the resulting digital output. This is accomplished by using a clock to control the timing of each sampling interval.

As discussed in the section on sampling, there are several ways to time the conversions, depending on the required speed and resolution of the digital output. An ADC can run continuously and interrupt the processor at each new output, or it can be externally clocked to run at a specific rate. Externally clocked operation is common in high-speed applications, where a fixed, predictable sample rate matters more than processor-driven control.

A significant disadvantage of analog-to-digital conversion is that noise may be introduced into the signal, resulting in erroneous output values. This may be due to internal nonlinearities in the ADC as well as input-referred noise from sources outside the converter. Noise-induced errors are a common source of error in data acquisition systems.