## Using least-squares for channel estimation pt 2: Python implementation

In my last post I talked about using least squares to estimate a channel at the receiver of a communication system, given some known training sequence. Now I have code that puts it into practice. The code is posted below; I'll start with an overview of what it does, then dig into the details after the code.

The code breaks down as follows. First, I generate a channel of random values: Rayleigh fading taps drawn from a Gaussian number generator. (By the way, I make no claims about how realistic or unrealistic this channel is; I'm just using it to show that the estimation works. Realistic channels, and existing channel models, can be discussed some other time.)

Next, I generate a pseudonoise training sequence of +1 and -1 values to send through my communication system. Realistically, I would use a Gold sequence or a Barker sequence, some special binary sequence that guarantees good correlation properties for synchronization purposes, but here I just randomly generate a binary sequence.

I apply the communication model we discussed before, where the received signal is y = h ∗ x + v: h is our discretized channel, x is the sent signal, v is additive white Gaussian noise, and ∗ denotes convolution.
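In NumPy, the setup described so far might look like the following sketch; the tap count, training length, and noise level are my own choices, not values from the original script:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed lengths and noise level (my choices, not from the original code):
L, N = 4, 64                                   # channel taps, training length

# Rayleigh fading taps from a complex Gaussian generator.
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)

# Pseudonoise +/-1 training sequence.
x = rng.choice([-1.0, 1.0], size=N)

# Received signal: y = h * x + v (convolution plus AWGN).
v = 0.02 * (rng.standard_normal(N + L - 1) + 1j * rng.standard_normal(N + L - 1))
y = np.convolve(h, x) + v
```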

Now the fun begins. I estimate the channel using least squares exactly as I had modeled in my previous post: I build a Toeplitz matrix from the known training values and form the "b" vector from the actually received values. I then use this estimated channel to find an equalizer that best cancels it, by building a Toeplitz matrix from the estimated channel and using a Kronecker delta vector as "b". Finally, I form a Toeplitz matrix from the received values, form "b" from the known training values, and solve for an equalizer directly. Two steps are definitely not better than one when there's noise to be accounted for.

So here’s the code.
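As a self-contained sketch of the three least-squares solves described above — the lengths, noise level, delay, and the NumPy/SciPy calls are my own assumptions, not necessarily what the original script used:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)

# Assumed lengths and noise level (the post doesn't fix these):
L, N, M = 4, 64, 12            # channel, training-sequence, equalizer lengths
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=N)
noise = 0.02 * (rng.standard_normal(N + L - 1) + 1j * rng.standard_normal(N + L - 1))
y = np.convolve(h, x) + noise

def conv_matrix(seq, ncols):
    """Toeplitz convolution matrix: conv_matrix(s, k) @ v == np.convolve(s, v)
    for any v of length k. Zero padding fills in where the convolution
    has run out of samples."""
    col = np.r_[seq, np.zeros(ncols - 1)]
    row = np.r_[seq[0], np.zeros(ncols - 1)]
    return toeplitz(col, row)

# 1) Channel estimate: A from known training values, b from received values.
h_hat, *_ = np.linalg.lstsq(conv_matrix(x, L), y, rcond=None)

# 2) Equalizer from the estimate: A from h_hat, b a delayed Kronecker delta.
delay = (L + M) // 2           # a mid-span delay (my choice for this sketch)
target = np.zeros(L + M - 1)
target[delay] = 1.0
w_two_step, *_ = np.linalg.lstsq(conv_matrix(h_hat, M), target, rcond=None)

# 3) Direct equalizer: A from received values, b from delayed training values.
b = np.zeros(N + L + M - 2)
b[delay:delay + N] = x
w_direct, *_ = np.linalg.lstsq(conv_matrix(y, M), b, rcond=None)
```

All three A matrices are tall (overdetermined systems), matching the conditions discussed below.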

Let's talk about some of the nuances of why any of this works. All three least-squares problems are set up to be overdetermined. I'll say it again: do NOT do this with an underdetermined system, and if you have a square A matrix, this probably isn't going to work out well for you either. Basically, choose a training sequence that is long enough, and an equalizer that is at least as long as the channel but shorter than the training sequence by some margin, and you're probably golden. Realistically, this won't be a problem if you use a preset sequence for channel estimation, like concatenated Barker codes.

There are some places where I use if statements to append zeros or shorten vectors before forming the Toeplitz matrix. Frankly, the matrix has to fit the parameters you have on hand, and there may be times when you need to fill in zeros because there's nothing left of the convolution to express in the linear system.

The number of samples I choose to delay here follows the rule of thumb recommended by Dr. John Cioffi at Stanford, as I discussed before. If you change the value by one in either direction, I'm pretty sure you can get results that give a less-than-perfect Kronecker delta in the final plot of the channel convolved with the equalizer.

Solving for an equalizer in two steps makes the problem easier to visualize, but it will generally give worse results (maybe not by much) than the direct equalizer solve: the noise effects compound across the two operations.

An example set of plots is given below. Remember that the channel and equalizer shown are just magnitudes, so don't try to visually convolve them; that will only make things confusing.

## Nonstationary Channel Estimation Using Recursive Least Squares

This example shows how to track the time-varying weights of a nonstationary channel using the Recursive Least Squares (RLS) algorithm.

The channel is modeled using a time-varying fifth-order FIR filter. The RLS filter and the unknown, nonstationary channel process the same input signal. The output of the channel with noise added is the desired signal. From this signal the RLS filter attempts to estimate the FIR coefficients that describe the channel. All that is known *a priori* is the FIR length.

When you run the model, a plot is made of each weight over time, with the "true" filter weights drawn in yellow and the estimates of those weights in magenta. Each of the five weights is plotted on a separate axis.

### Exploring the Example

RLS is an efficient, recursive algorithm that converges to a good estimate of the FIR coefficients of the channel if the algorithm is properly initialized. Experiment with the value of the tunable Forgetting factor parameter in the RLS Filter block. A good initial guess is *(2N-1)/2N* where *N* is the number of weights. The Forgetting factor indicates how fast the algorithm "forgets" previous samples. A value of 1 specifies an infinite memory. Smaller values allow the algorithm to track changes in the weights faster. However, a value that is too small will cause the estimates to be overly influenced by the channel noise.
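The recursion the block implements can be sketched in NumPy as follows; the drift rate, noise level, and run length are my own assumptions, while the update equations are the standard RLS recursion (see the Haykin reference below):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5                          # number of FIR weights (known a priori)
lam = (2 * N - 1) / (2 * N)    # forgetting factor, the suggested initial guess
T = 2000                       # number of samples to simulate (my choice)

w_true = rng.standard_normal(N)   # time-varying "true" channel weights
w_hat = np.zeros(N)               # RLS estimate of the weights
P = 1e3 * np.eye(N)               # inverse correlation matrix (large initial value)
u = np.zeros(N)                   # input regressor, most recent sample first

for t in range(T):
    w_true += 1e-3 * rng.standard_normal(N)        # nonstationary channel drift
    u = np.r_[rng.standard_normal(), u[:-1]]       # shift in a new input sample
    d = w_true @ u + 0.01 * rng.standard_normal()  # noisy channel output = desired signal

    # Standard RLS update:
    k = P @ u / (lam + u @ P @ u)   # gain vector
    e = d - w_hat @ u               # a-priori estimation error
    w_hat += k * e                  # weight update
    P = (P - np.outer(k, u @ P)) / lam
```

After the run, `w_hat` should track `w_true` closely despite the drift, which is the behavior the per-weight plots display.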

### References

For more information on the Recursive Least Squares algorithm, see S. Haykin, *Adaptive Filter Theory*, 3rd ed., Prentice Hall, 1996.
