Why do we need codecs?
Transmitting an audio stream (such as your voice during a call) over a digital medium requires converting an analog signal (your voice in the air) into a digital signal (ultimately, your voice encoded as 1s and 0s). Codecs handle both the encoding and the decoding of the signal. Therefore, if the caller uses a given codec, the recipient also needs it to decode the signal.
Beyond transmitting the signal, a codec can also compress the data to reduce its size and therefore the bandwidth required for its transfer. Compression comes at the cost of quality, and two main types of codecs are distinguished: codecs that compress at the expense of fidelity are called "lossy," while codecs that preserve the datastream exactly are "lossless."
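The defining property of a lossless codec is that decoding recovers the original data bit-for-bit. A minimal sketch of that round trip, using zlib (a general-purpose lossless compressor standing in for a lossless audio codec, purely for illustration):

```python
import zlib

# Some raw byte data standing in for uncompressed audio samples.
samples = bytes(range(0, 200, 5)) * 50

compressed = zlib.compress(samples)
restored = zlib.decompress(compressed)

# Lossless: the round trip is bit-exact.
assert restored == samples
# A lossy codec would give up this exactness in exchange for a
# much smaller compressed size.
```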
There are numerous codecs, each designed and optimized for a specific use. For instance, codecs that carry voice during telephone calls must fit the medium's needs: very low latency between source encoding and playback. Therefore, codecs used for regular phone calls are lossy codecs designed to reduce bandwidth needs while keeping latency minimal, at the cost of audio quality (the familiar "telephone voice").
What are the codecs supported by CALLR?
We mainly use two codecs at CALLR. They are the most widespread in the telecommunications industry, which allows for smooth interconnection with other carriers.
- G.711/PCM (lossless): PCM is an audio codec offering very high quality at the cost of a high bandwidth requirement: it is an uncompressed codec running at 64 kbit/s. There are two slightly different versions: μ-law (PCMU), used in North America and Japan, and A-law (PCMA), used in the rest of the world.
- G.729 (lossy): G.729 is used to reduce the bandwidth needs of an audio signal. It compresses the stream down to 8 kbit/s while preserving sufficient quality for phone calls.
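G.711 gets good perceived voice quality from only 8 bits per sample by companding: quiet amplitudes, where most speech energy lives, get finer quantization steps than loud ones. Here is a minimal sketch of the μ-law curve behind PCMU (the function names and the float-in-[-1, 1] convention are ours for illustration; real PCMU operates on integer samples):

```python
import math

MU = 255.0  # companding parameter used by G.711 mu-law (PCMU)

def mu_law_encode(x: float) -> float:
    """Compress a linear sample in [-1, 1] onto the mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_decode(y: float) -> float:
    """Expand a companded sample back to linear amplitude."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize(y: float, bits: int = 8) -> float:
    """Round to the nearest of 2**bits uniform levels, like an 8-bit converter."""
    levels = 2 ** (bits - 1)
    return round(y * levels) / levels

# For a quiet sample, companding before 8-bit quantization loses far
# less than quantizing the linear value directly.
x = 0.01
mu_error = abs(mu_law_decode(quantize(mu_law_encode(x))) - x)
linear_error = abs(quantize(x) - x)
# mu_error comes out roughly an order of magnitude smaller than linear_error
```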
While bandwidth usage is not a big issue when originating a small number of phone calls, compression becomes essential at scale (e.g., call centers).
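To put rough numbers on that, here is a back-of-the-envelope sketch. The 64 and 8 kbit/s payload rates are the standard G.711 and G.729 figures; the 20 ms packetization interval and 40-byte IP/UDP/RTP header are common defaults we are assuming, and layer-2 overhead is ignored:

```python
def voip_bandwidth_kbps(codec_kbps: float, ptime_ms: float = 20.0,
                        header_bytes: int = 40) -> float:
    """Per-direction bandwidth of one call, including IP/UDP/RTP headers."""
    packets_per_s = 1000.0 / ptime_ms
    overhead_kbps = packets_per_s * header_bytes * 8 / 1000.0
    return codec_kbps + overhead_kbps

# One call: G.711 needs 64 + 16 = 80 kbit/s, G.729 needs 8 + 16 = 24 kbit/s.
calls = 500  # hypothetical concurrent call count for a call center
print(f"G.711: {voip_bandwidth_kbps(64.0) * calls / 1000:.0f} Mbit/s")  # 40 Mbit/s
print(f"G.729: {voip_bandwidth_kbps(8.0) * calls / 1000:.0f} Mbit/s")   # 12 Mbit/s
```

Note that the per-packet header overhead is the same for both codecs, so G.729's advantage shrinks a little once headers are counted: roughly 3x less bandwidth here rather than the 8x the raw bitrates suggest.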