Kuantisasi (Quantization)

A specific example would be [[compact disc]] (CD) audio which is sampled at 44,100 [[Hertz|Hz]] and quantized with [[16 bit]]s (2 [[byte]]s) which can be one of 65,536 (i.e. <math>2^{16}</math>) possible values per sample.
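
As a quick check of the arithmetic, a minimal Python sketch (the clamping convention and helper name below are illustrative, not taken from any particular CD encoder):

<syntaxhighlight lang="python">
# Number of distinct codes available with 16 bits per sample.
num_levels = 2 ** 16
print(num_levels)  # 65536

def to_pcm16(sample: float) -> int:
    """Map a sample in [-1.0, 1.0] to one of 65,536 signed 16-bit codes,
    roughly as a CD-style PCM encoder would."""
    sample = max(-1.0, min(sample, 1.0 - 2.0 ** -15))  # clamp to representable range
    return int(sample * 32768)  # an integer in [-32768, 32767]

print(to_pcm16(0.5))   # 16384
print(to_pcm16(-1.0))  # -32768
</syntaxhighlight>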
 
In electronics, adaptive quantization is a quantization process that varies the step size based on the changes of the input signal, as a means of efficient compression. Two approaches commonly used are forward adaptive quantization and backward adaptive quantization.
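
A minimal sketch of the backward-adaptive idea, in which the step size is adapted from the transmitted codes alone so that a decoder can mirror the adaptation without side information (the clamping range and the multipliers 1.5 and 0.95 are illustrative choices, not taken from any standard):

<syntaxhighlight lang="python">
def backward_adaptive_quantize(samples, step=0.1):
    """Toy backward-adaptive quantizer: the step size evolves as a function of
    the emitted codes only, which is what lets the decoder track it."""
    codes = []
    for x in samples:
        code = int(x / step)          # quantization index at the current step size
        code = max(-4, min(code, 3))  # clamp to a 3-bit code range
        codes.append(code)
        # Grow the step after near-saturating codes, shrink it after small ones.
        step *= 1.5 if abs(code) >= 3 else 0.95
    return codes
</syntaxhighlight>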
 
== Mathematical description ==
In general, a scalar quantization operator can be represented as

:<math>Q(x) = g\left(\lfloor f(x) \rfloor\right)</math>
 
where
* <math>x</math> is a real number to be quantized,
* <math>\lfloor \cdot \rfloor</math> is the [[floor function]], yielding an integer result <math>i = \lfloor f(x) \rfloor</math> that is sometimes referred to as the ''quantization index'',
* <math>f(x)</math> and <math>g(i)</math> are arbitrary real-valued functions.
 
The integer-valued quantization index <math>i</math> is the representation that is typically stored or transmitted, and then the final interpretation is constructed using <math>g(i)</math> when the data is later interpreted.
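
A direct transcription of this definition into Python, with <code>f</code> and <code>g</code> passed in as arbitrary functions (the uniform-quantizer choices in the usage line are just placeholders):

<syntaxhighlight lang="python">
import math

def quantize(x, f, g):
    """Generic scalar quantizer Q(x) = g(floor(f(x)))."""
    i = math.floor(f(x))  # the integer quantization index that would be stored
    return i, g(i)        # index for transmission, g(i) for reconstruction

# Example: a uniform quantizer with step size 0.25 arises from
# f(x) = x / 0.25 and g(i) = 0.25 * (i + 0.5).
index, value = quantize(0.6, lambda x: x / 0.25, lambda i: 0.25 * (i + 0.5))
print(index, value)  # 2 0.625
</syntaxhighlight>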
In computer audio and most other applications, a method known as ''uniform quantization'' is the most common. There are two common variations of uniform quantization, called ''mid-rise'' and ''mid-tread'' uniform quantizers.
 
If <math>x</math> is a real-valued number between -1 and 1, a mid-rise uniform quantization operator that uses ''M'' bits of precision to represent each quantization index can be expressed as
 
:<math>Q(x) = \frac{\left\lfloor 2^{M-1}x \right\rfloor+0.5}{2^{M-1}}</math>.
 
In this case the <math>f(x)</math> and <math>g(i)</math> operators are just multiplying scale factors (one multiplier being the inverse of the other) along with an offset in the ''g''(''i'') function to place the representation value in the middle of the input region for each quantization index. The value <math>2^{-(M-1)}</math> is often referred to as the ''quantization step size''. Using this quantization law and assuming that [[quantization noise]] is approximately [[uniform distribution (continuous)|uniformly distributed]] over the quantization step size (an assumption typically accurate for rapidly varying <math>x</math> or high <math>M</math>) and further assuming that the input signal <math>x</math> to be quantized is approximately uniformly distributed over the entire interval from -1 to 1, the [[signal to noise ratio]] (SNR) of the quantization can be computed as
 
:<math>\frac{S}{N_q} \approx 20 \log_{10}\left(2^M\right) = M \cdot 20 \log_{10} 2 \approx M \cdot 6.0206\ \mathrm{dB}</math>

From this equation, it is often said that the SNR is approximately 6 [[decibel|dB]] per [[bit]].
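
This rule of thumb can be checked numerically; a minimal sketch, assuming a uniformly distributed test signal and arbitrary bit depths:

<syntaxhighlight lang="python">
import math
import random

def midrise_quantize(x, M):
    """Mid-rise uniform quantizer: Q(x) = (floor(2**(M-1)*x) + 0.5) / 2**(M-1)."""
    return (math.floor(2 ** (M - 1) * x) + 0.5) / 2 ** (M - 1)

random.seed(0)
signal = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
signal_power = sum(x * x for x in signal) / len(signal)

for M in (4, 8, 12):
    noise_power = sum((x - midrise_quantize(x, M)) ** 2 for x in signal) / len(signal)
    print(M, round(10 * math.log10(signal_power / noise_power), 2))
# Prints roughly 6.02 dB per bit: about 24, 48, and 72 dB.
</syntaxhighlight>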
 
For mid-tread uniform quantization, the offset of 0.5 would be added within the floor function instead of outside of it.
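
In code, the difference from the mid-rise sketch above is only where the 0.5 offset sits (a hypothetical helper mirroring the earlier one):

<syntaxhighlight lang="python">
import math

def midtread_quantize(x, M):
    """Mid-tread variant: the 0.5 offset moves inside the floor, so an input
    of exactly 0 is reproduced as 0 (a 'tread' rather than a 'riser' at zero)."""
    return math.floor(2 ** (M - 1) * x + 0.5) / 2 ** (M - 1)

print(midtread_quantize(0.0, 8))  # 0.0 (the mid-rise form gives 1/256 here)
</syntaxhighlight>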
 
Sometimes, mid-rise quantization is used without adding the offset of 0.5. This reduces the signal to noise ratio by approximately 6.02&nbsp;dB, but may be acceptable for the sake of simplicity when the step size is small.
 
In [[digital telephony]], two popular quantization schemes are the '[[A-law algorithm|A-law]]' (dominant in [[Europe]]) and '[[Mu-law algorithm|μ-law]]' (dominant in [[North America]] and [[Japan]]). These schemes map discrete analog values to an 8-bit scale that is nearly linear for small values and then increases logarithmically as amplitude grows. Because the human ear's perception of [[loudness]] is roughly logarithmic, this provides a higher signal to noise ratio over the range of audible sound intensities for a given number of bits.
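
A sketch of the continuous μ-law compression curve with μ = 255 (deployed codecs quantize a segmented 8-bit approximation of this curve rather than evaluating it directly):

<syntaxhighlight lang="python">
import math

MU = 255  # the mu parameter used in North American and Japanese telephony

def mu_law_compress(x):
    """Continuous mu-law compressor: nearly linear near 0, logarithmic for
    large |x|; the result in [-1, 1] is then uniformly quantized to 8 bits."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

for x in (0.001, 0.01, 0.1, 1.0):
    print(x, round(mu_law_compress(x), 3))
# Small inputs are boosted (0.001 -> ~0.041), so quiet sounds keep
# proportionally more of the 8-bit code space.
</syntaxhighlight>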
 
== Quantization and data compression ==
Quantization plays a major part in [[lossy data compression]]. In many cases, quantization can be viewed as the fundamental element that distinguishes [[lossy data compression]] from [[lossless data compression]], and the use of quantization is nearly always motivated by the need to reduce the amount of data needed to represent a signal. In some compression schemes, like [[MP3]] or [[Vorbis]], compression is also achieved by selectively discarding some data, an action that can be analyzed as a quantization process (e.g., a vector quantization process) or can be considered a different kind of lossy process.
 
One example of a lossy compression scheme that uses quantization is [[JPEG]] image compression.
During JPEG encoding, the data representing an image (typically 8 bits for each of three color components per pixel) is processed using a [[discrete cosine transform]] and is then quantized and [[entropy encoding|entropy coded]]. By reducing the precision of the transformed values using quantization, the number of bits needed to represent the image can be reduced substantially.
For example, images can often be represented with acceptable quality using JPEG at less than 3 bits per pixel (as opposed to the typical 24 bits per pixel needed prior to JPEG compression).
Even the original representation using 24 bits per pixel requires quantization for its [[pulse-code modulation|PCM]] sampling structure.
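
A sketch of just the quantization step, assuming SciPy for the DCT; the 8×8 input block is synthetic, and the table is the example luminance quantization table from the JPEG specification (Annex K):

<syntaxhighlight lang="python">
import numpy as np
from scipy.fft import dctn, idctn

# Example luminance quantization table from the JPEG specification (Annex K).
Q_TABLE = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128  # synthetic 8x8 block

coeffs = dctn(block, norm='ortho')                  # 2-D DCT of the block
quantized = np.round(coeffs / Q_TABLE).astype(int)  # small coefficients round to 0
reconstructed = idctn(quantized * Q_TABLE, norm='ortho')  # decoder's approximation

print(np.count_nonzero(quantized), "of 64 coefficients survive quantization")
</syntaxhighlight>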
In modern compression technology, the [[information entropy|entropy]] of the output of a quantizer matters more than the number of possible values of its output (the number of values being <math>2^M</math> in the above example).
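
For example, a sketch that measures the entropy of the index stream for a Gaussian input (the bit depth and signal statistics are arbitrary choices) shows the entropy falling well below the raw index size:

<syntaxhighlight lang="python">
import math
import random
from collections import Counter

random.seed(0)
M = 8  # bits per quantization index, so 2**M = 256 possible output values
step = 2.0 ** -(M - 1)

# Gaussian input concentrated near 0: a few quantization indices dominate.
indices = [math.floor(random.gauss(0.0, 0.05) / step) for _ in range(100_000)]

counts = Counter(indices)
total = len(indices)
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(round(entropy, 2), "bits of entropy vs", M, "bits of raw index size")
</syntaxhighlight>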
 
In order to determine how many bits are necessary to effect a given precision, algorithms are used. Suppose, for example, that it is necessary to record six significant digits, that is to say, millionths. The number of values that can be expressed by N bits is equal to two to the Nth power. To express six decimal digits, the required number of bits is determined by rounding (6 / log 2)—where '''log''' refers to the base ten, or common, logarithm—up to the nearest integer. Since the logarithm of 2, base ten, is approximately 0.30102, the required number of bits is then given by (6 / 0.30102), or 19.932, rounded up to the nearest integer, ''viz.'', '''20''' bits.
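
The same calculation as a two-line check:

<syntaxhighlight lang="python">
import math

digits = 6
bits = math.ceil(digits / math.log10(2))  # 6 / 0.30103 = 19.93, rounds up to 20
print(bits)  # 20
</syntaxhighlight>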
 
This type of quantization—where a set of binary digits, ''e.g.'', an arithmetic register in a CPU, are used to represent a quantity—is called Vernier quantization. It is also possible, although rather less efficient, to rely upon equally spaced quantization levels. This is only practical when a small range of values is expected to be captured: for example, a set of eight possible values requires eight equally spaced quantization levels—which is not unreasonable, although obviously less efficient than a mere trio of binary digits (bits)—but a set of, say, sixty-four possible values, requiring sixty-four equally spaced quantization levels, can be expressed using only six bits, which is obviously far more efficient.
 
== Relation to quantization in nature ==
At the most fundamental level, some [[physical quantity|physical quantities]] are quantized. This is a result of [[quantum mechanics]] (see [[Quantization (physics)]]). Signals may be treated as continuous for mathematical simplicity by considering the small quantizations as negligible.
 
In any practical application, this inherent quantization is irrelevant for two reasons. First, it is overshadowed by [[signal noise]], the intrusion of extraneous phenomena present in the system upon the signal of interest. The second, which appears only in measurement applications, is the inaccuracy of instruments. Thus, although all physical signals are intrinsically quantized, the error introduced by modeling them as continuous is vanishingly small.
 
== See also ==