To give you an idea of how long ago I bought this book, it's got a Galloway and Porter price label on it. They shut down years ago (boo). It's something of a Galloway and Porter special - a cheap, somewhat weird book. It feels rather like someone's monograph rather than a "proper" text, but that's good enough for my purposes.

What's my "purpose" here? It's simply to get a handle on the ideas behind modern coding theory. I'd read a few web pages, but the ideas didn't really click for me.

The start of the book is about fairly basic codes. It turns out that basic codes are really nothing more than linear subspaces of {0,1}^n. So, there are descriptions of Hamming, Golay, Reed-Muller, BCH, Reed-Solomon codes, etc., but I just skipped the maths!
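To make the "linear subspace" point concrete, here's a minimal sketch using the standard Hamming(7,4) code (the textbook generator and parity-check matrices, not necessarily the ones this book uses): codewords are exactly the span of G's rows over GF(2), and H detects when a word falls outside that subspace.

```python
# Generator matrix G: maps 4 message bits to a 7-bit codeword.
# Systematic form [I | P], so the message appears as the first 4 bits.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

# Parity-check matrix H ([P^T | I]): H.x = 0 (mod 2) iff x is a codeword.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(msg):
    """Multiply the 4-bit message by G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def syndrome(word):
    """H.word mod 2; all-zero exactly when word lies in the code subspace."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]
```

A nice bonus: for a single flipped bit, the syndrome equals the corresponding column of H, which is what makes Hamming decoding so simple.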

Convolutional codes, with maximum-likelihood decoding via the Viterbi algorithm, are introduced. I finally know what trellises are about! The idea of convolutional codes is really neat. From there, ways of combining codes are discussed.
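The encoding side of a convolutional code really is just a shift register. Here's a sketch of the classic rate-1/2, constraint-length-3 code with generator polynomials (7, 5) in octal (a standard example, not specifically the book's):

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3,
    generators 111 and 101 (the classic (7,5) octal pair).
    Each input bit produces two output bits from the register state."""
    s1 = s2 = 0  # the two delay elements of the shift register
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator 111: tap all three positions
        out.append(b ^ s2)       # generator 101: tap first and last
        s1, s2 = b, s1           # shift the register
    return out
```

The Viterbi decoder then just walks the trellis of these four register states, keeping the cheapest path into each state at every step.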

This seems to be the clever part of modern codes - finding a way of encoding a large chunk of data so that we can sensibly find the errors in it, without constructing a horribly complex code. For example, if you have a square of bits and put parity checks along the rows and columns, a 1-bit error will show up as something being wrong in one row and one column, and you can infer the particular bit that's wrong. Turbo codes combine separate codes so that information can be shared between them to correct the data. Low-density parity-check (LDPC) codes apply several overlapping parity checks over the data in order to identify corruption (like the square example above).
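The square-of-bits example above is easy to sketch directly (function and variable names here are my own, for illustration):

```python
def locate_error(grid, row_parity, col_parity):
    """Given a square of bits plus its stored row and column parities,
    return the (row, col) of a single flipped bit, or None if all the
    parity checks hold. The flipped bit is exactly where the one bad
    row and the one bad column intersect."""
    bad_rows = [r for r, row in enumerate(grid)
                if sum(row) % 2 != row_parity[r]]
    bad_cols = [c in range(len(grid[0])) and c for c in range(len(grid[0]))
                if sum(row[c] for row in grid) % 2 != col_parity[c]]
    if bad_rows and bad_cols:
        return (bad_rows[0], bad_cols[0])
    return None
```

With a 2x2 grid [[1, 0], [0, 1]] (both row parities 1, both column parities 1), flipping the top-right bit makes row 0 and column 1 fail their checks, so the error is located at (0, 1).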

The book also covers soft-decision coding (e.g. where normally-distributed noise is applied to your signal) and codes combined with digital modulation (PSK, QAM and all that).

Is the book any good? Not sure. It covers the topics. It has references. It doesn't make the subject particularly accessible. I learnt stuff by ignoring the maths. I bought a cheap copy, and it worked out ok for me.

*Posted 2015-10-24.*