# Concatenated codes

Dave Forney (2009), Scholarpedia, 4(2):8374. doi:10.4249/scholarpedia.8374, revision #91155

Curator: Dave Forney

Concatenated codes are error-correcting codes that are constructed from two or more simpler codes in order to achieve good performance with reasonable complexity. Originally introduced by Forney in 1965 to address a theoretical issue, they became widely used in space communications in the 1970s. Turbo codes and other modern capacity-approaching codes may be regarded as elaborations of this approach.

## Origins

The field of channel coding is concerned with sending a stream of data at as high a rate as possible over a given communications channel, and then decoding the original data reliably at the receiver, using encoding and decoding algorithms that are feasible to implement in a given technology. For an overview of the history of channel coding, see (Costello and Forney, 2007).

Shannon's channel coding theorem (Shannon, 1948) shows that over many common channels there exist channel coding schemes that are able to transmit data reliably at all rates $$R$$ less than a certain threshold $$C\ ,$$ called the channel capacity of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as the block length $$N$$ of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially with $$N\ ,$$ so such an optimum decoder rapidly becomes infeasible.
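The exponential cost of naive optimum decoding can be made concrete with a toy sketch (not from the article): a brute-force maximum-likelihood decoder for a binary symmetric channel simply compares the received word against every codeword, so its work grows with the number of codewords, which is exponential in the block length.

```python
def ml_decode(received, codewords):
    """Brute-force maximum-likelihood decoding for a binary symmetric
    channel: return the codeword at minimum Hamming distance from the
    received word.  Cost is proportional to the number of codewords."""
    return min(codewords,
               key=lambda c: sum(a != b for a, b in zip(c, received)))

# Toy (3,1) repetition code: just 2 codewords of length 3.
codewords = [(0, 0, 0), (1, 1, 1)]
print(ml_decode((1, 0, 1), codewords))  # -> (1, 1, 1)

# A rate-1/2 code of block length N has 2^(N/2) codewords, so the
# search grows exponentially with N:
for N in (8, 16, 32):
    print(N, 2 ** (N // 2))
```

This is why the search for structured codes with feasible decoders, rather than exhaustive comparison, dominated the field.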

In his doctoral thesis, Forney (Forney, 1966) showed that concatenated codes could be used to achieve exponentially decreasing error probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with the code block length $$N\ .$$

The basic concatenated coding scheme considered by Forney is shown in Figure 1. The inner code is a short block code like that envisioned by Shannon, with rate $$r$$ close to $$C\ ,$$ block length $$n\ ,$$ and therefore $$2^{nr}$$ codewords. The inner decoder decodes optimally, so its complexity increases exponentially with $$n\ ;$$ for large enough $$n$$ it achieves a moderately low decoding error probability.

The outer code is an algebraic Reed-Solomon (RS) code (Reed and Solomon, 1960) of length $$2^{nr}$$ over the finite field with $$2^{nr}$$ elements, each element corresponding to an inner codeword. (The overall block length of the concatenated code is therefore $$N = n2^{nr}\ ,$$ which is exponential in $$n\ ,$$ so the complexity of the inner decoder is only linear in $$N\ .$$) The outer decoder uses an algebraic error-correction algorithm whose complexity is only polynomial in the RS code length $$2^{nr}\ ;$$ it can drive the ultimate probability of decoding error as low as desired.
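The parameter bookkeeping above can be sketched numerically. The following toy calculation (an illustration, not part of the original article) shows how the outer alphabet size $$2^{nr}$$ and the overall block length $$N = n2^{nr}$$ grow exponentially in the inner block length $$n\ ,$$ which is exactly why the exponential-in-$$n$$ inner decoder is only linear in $$N$$:

```python
def concat_params(n, r):
    """Parameters of Forney's construction for an inner code of
    length n and rate r: the outer Reed-Solomon code has length
    2^(n*r) over GF(2^(n*r)), and the overall block length is
    N = n * 2^(n*r)."""
    q = 2 ** int(n * r)   # number of inner codewords = outer alphabet size
    N = n * q             # overall concatenated block length
    return q, N

for n in (8, 12, 16):
    q, N = concat_params(n, r=0.5)
    print(f"n={n}: outer code length {q}, overall block length N={N}")
```

For instance, at rate $$r = 1/2$$ an inner length of only $$n = 16$$ already gives an outer RS code of length 256 and an overall block length of 4096.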

While concatenated codes showed that the performance-complexity tradeoff problem of channel coding could be solved in principle, they were hardly practical in the technology of the 1960s. The author recalls much eye-rolling when he presented concatenated codes to a Bell Labs research group in 1965, and discussed code lengths up into the thousands.

## Space applications

However, by the 1970s, technology had advanced sufficiently that concatenated codes became standardized by NASA for space applications.

The NASA standard concatenated coding system is shown in Figure 2.

In comparison to Figure 1, the NASA system incorporated two significant improvements. First, rather than a block code, the NASA standard used a short-constraint-length (64-state) convolutional code as the inner code, decoded by the optimal Viterbi algorithm (see the article on the Viterbi algorithm in Scholarpedia), because by this time it had been realized that convolutional codes are superior to block codes from the point of view of performance vs. complexity. Second, the NASA standard incorporated an interleaver to spread out bursts of errors, both because the errors out of a Viterbi decoder are somewhat bursty and because real space channels can suffer other kinds of burst errors. The outer code was chosen to be a powerful 16-error-correcting Reed-Solomon code of length 255 over the finite field with 256 elements.
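The burst-spreading effect of an interleaver is easy to see in a toy sketch (a minimal illustration, not the NASA standard's actual interleaver): symbols are written into a matrix row by row and read out column by column, so a run of consecutive channel errors lands in different rows, i.e. in different outer-code symbols after deinterleaving.

```python
def interleave(symbols, rows, cols):
    """Block interleaver: write row by row, read column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # Reading columns of a rows x cols matrix yields a cols x rows
    # matrix; interleaving again with swapped dimensions inverts it.
    return interleave(symbols, cols, rows)

data = list(range(12))
tx = interleave(data, rows=3, cols=4)

# A channel burst wipes out 3 consecutive transmitted symbols.
rx = tx[:]
for i in (4, 5, 6):
    rx[i] = None

back = deinterleave(rx, rows=3, cols=4)
print([i for i, s in enumerate(back) if s is None])  # -> [2, 5, 9]
```

After deinterleaving, the three-symbol burst appears as three isolated erasures, each falling in a different outer codeword symbol, which the Reed-Solomon code handles far more easily than a concentrated burst.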

The NASA standard concatenated code achieved an impressive coding gain of more than 7 dB on an additive white Gaussian noise channel at decoding error probabilities of the order of $$10^{-5}\ ,$$ and became a workhorse for space applications during the 1970s and 1980s.

When the primary antenna failed to deploy on the Galileo mission to Jupiter in 1992, heroic engineering efforts were undertaken to design the most powerful concatenated code conceived up to that time, and to program it into the spacecraft computers. The inner code was a $$2^{14}$$-state convolutional code, decoded by the Viterbi algorithm. The outer code actually consisted of multiple Reed-Solomon codes of varying strengths. Iterative decoding was used as follows: if one of the outer decoders succeeded in decoding on the basis of partial decoding by the inner decoder, then the outer decoder's decisions were fed back to the inner decoder to help it make further progress, and so forth. This elaborate system achieved a coding gain of more than 10 dB at decoding error probabilities of the order of $$10^{-7}\ .$$

## Capacity-approaching codes

The field of channel coding was revolutionized by the invention of turbo codes by Berrou et al. in 1993 (Berrou et al., 1993). Turbo codes use multiple carefully chosen codes, a pseudo-random interleaver, and iterative decoding to approach the Shannon limit within 1 dB (see the article on turbo codes in Scholarpedia).

The original turbo codes of Berrou et al. are regarded as "parallel concatenated" codes, whereas the systems described above are now called "serial concatenated" codes. With iterative decoding, concatenation of even very simple codes can yield superb performance. For example, Figure 3 illustrates a "repeat-accumulate" (RA) code of Divsalar, Jin, and McEliece (Divsalar et al., 1998), which serially concatenates a trivial repetition code of length 3 (i.e., the code with the two codewords 000 and 111) with an ultra-simple 2-state, rate-1 convolutional "accumulate" code via an interleaver. Compared to the elaborate Galileo system described above, this simple RA system is much easier to decode and, quite amazingly, performs better!
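The RA encoder just described can be sketched in a few lines (a toy encoder under the stated structure, with an arbitrary example permutation; no particular interleaver design is implied): repeat each information bit three times, permute, then pass the result through a running mod-2 sum, which is the 2-state rate-1 "accumulator".

```python
import random

def ra_encode(bits, perm):
    """Sketch of a rate-1/3 repeat-accumulate encoder: repeat each
    bit 3 times, apply the interleaver permutation perm, then run
    the stream through a 2-state accumulator (running mod-2 sum)."""
    repeated = [b for b in bits for _ in range(3)]
    interleaved = [repeated[p] for p in perm]
    out, acc = [], 0
    for b in interleaved:
        acc ^= b          # the "accumulate" convolutional code
        out.append(acc)
    return out

bits = [1, 0, 1, 1]
perm = list(range(len(bits) * 3))
random.seed(0)            # a pseudo-random interleaver for illustration
random.shuffle(perm)
print(ra_encode(bits, perm))
```

Decoding such a code iteratively, by passing soft messages between the repetition and accumulator constraints through the interleaver, is what gives this trivial-looking construction its capacity-approaching performance.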

All capacity-approaching codes are now regarded as "codes on graphs," in which a (possibly large) number of simple codes are interconnected according to some graph topology. Any such code may be regarded as a (possibly elaborate) concatenated code.