Sunday, February 19, 2012

A Bayesian Approach To Kalman Filtering

INTRODUCTION
Ripped straight from Wikipedia...
The Kalman filter is an algorithm, commonly used since the 1960s for improving vehicle navigation (among other applications, although aerospace is typical), that yields an optimized estimate of the system's state (e.g. position and velocity). The algorithm works recursively in real time on streams of noisy input observation data (typically, sensor measurements) and filters out errors using a least-squares curve-fit optimized with a mathematical prediction of the future state generated through a modeling of the system's physical characteristics.


TL;DR
Rockets


There are both linear and non-linear forms of the Kalman filter, with the non-linear forms including the Extended Kalman Filter (EKF), the Invariant Extended Kalman Filter, and the Unscented Kalman Filter (UKF). The non-linear versions of this algorithm represent the current "gold standard" in many application areas, including GPS and navigation.

Before jumping in the deep end of the pool, I decided to implement a simple example that shows the ideas and implementation of Kalman filtering, using a recursive Bayesian approach.


TL;DR
Homework


WEIGHTING FUNCTION FOR KALMAN UPDATING
The Kalman filter is often derived from a matrix equation standpoint. The math, at least to me, is long, involved, and fairly nasty to solve without the help of some MATLAB matrix witchery. However, there is also a second, more "gut level" way to approach the Kalman filter - by treating it as a case of recursive Bayesian filtering.

The derivation for the following equations can be found on pg. 47 of Bayesian Data Analysis, Second Edition, by Gelman, Carlin, Stern, and Rubin.

Basically, it is possible to estimate the mean of a normal distribution by following a weighting equation for the mean and variance, which can be represented as follows.

$\Large{\mu_1=\frac{\frac{1}{\tau_0^2}\mu_0+\frac{1}{\sigma^2}y}{\frac{1}{\tau_0^2}+\frac{1}{\sigma^2}}}$

$\Large{\frac{1}{\tau_1^2}=\frac{1}{\tau_0^2}+\frac{1}{\sigma^2}}$

Here, $\mu_1$ represents the new estimate for the mean and $\tau_1^2$ is the estimate for the new variance. $\mu_0$ represents the old mean, $\tau_0^2$ represents the old variance, $y$ is the new measured value, and $\sigma^2$ is the measurement variance. I will do an example later on to demonstrate this, but first, let's try to decide what this equation is really saying.

If the old value for the variance, $\tau_0^2$, is very large, then $\mu_0$ is multiplied by a very small number, effectively making the equation...

$\Large{\mu_1 = \frac{\frac{1}{\sigma^2}y}{\frac{1}{\sigma^2}}}$

which can be further reduced to simply...

$\Large{\mu_1 = y}$

What this means is that if we are not very confident in our old mean, $\mu_0$ (that is, $\tau_0^2$ is large), our best guess for $\mu_1$ is simply the value we measured, $y$. Our new variance also reduces to

$\Large{\frac{1}{\tau_1^2}=\frac{1}{\sigma^2}}$

Conversely, if we are not very confident in the new measurement $y$ (that is, $\sigma^2$ is large), our new mean $\mu_1$ is basically the same as $\mu_0$. The new mean is based on the weighting between the new variance and the old - so if we are equally confident in both the previous mean and the new measurement, our best estimate for $\mu_1$ will split the difference.


Using this update function, it is easy to wrap a new estimate back into the prior distribution, which is the core idea behind Bayesian analysis. But how does it come into play with Kalman filtering?

APPLICATION
The Kalman filter can be boiled down to a few basic steps.
1. Predict a future value, based on previous information. (Predict)
2. Compare prediction and measured value, using results as the new previous information. (Update)
3. If you run out of data, or are satisfied with the estimate, exit. Otherwise, GOTO 1.

Pretty simple, right? This method can yield some very interesting results, cutting through measurement error and giving a close approximation of the root signal.

From a high level, the Bayesian derivation is

$P(X_i|Y_{1:i}) \propto P(Y_i|X_i,Y_{1:i-1}) P(X_i|Y_{1:i-1})$

Because the new measurement $Y_i$ depends only on the current state $X_i$, and not on the past measurements, this reduces to

$P(X_i|Y_{1:i}) \propto P(Y_i|X_i) P(X_i|Y_{1:i-1})$

What this means is that we have a prediction, $P(X_i|Y_{1:i-1})$, which comes from the state model described a bit later, used in conjunction with a likelihood for the new measurement, $P(Y_i|X_i)$; combining the two is exactly what the weighting function described above does. Together these two pieces form the core of the Kalman filtering operation: Predict, Measure, Update, Repeat.
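To make this concrete for the linear-Gaussian model used in the rest of this post (here $\alpha_s$ is the source scale factor and $\sigma_s$, $\sigma_m$ are the source and measurement sigmas, all defined in the SETUP and derivation sections below), one pass of the filter is just a prediction followed by the weighting function from above:

$\text{Predict: } \mu_0 = \alpha_s \mu_{prev}, \qquad \tau_0^2 = \alpha_s^2 \tau_{prev}^2 + \sigma_s^2$

$\text{Update: } \mu_1 = \frac{\frac{1}{\tau_0^2}\mu_0 + \frac{1}{\sigma_m^2}y}{\frac{1}{\tau_0^2} + \frac{1}{\sigma_m^2}}, \qquad \frac{1}{\tau_1^2} = \frac{1}{\tau_0^2} + \frac{1}{\sigma_m^2}$

where $\mu_{prev}$ and $\tau_{prev}^2$ are the mean and variance carried over from the previous step, and $y$ is the new measurement.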

SETUP
The key things we need to know are:
1. The standard deviation of our source, $\sigma_s$. I chose .01 in the code below.
2. The scale factor of the source, $\alpha_s$. This can be calculated from $1 = (\text{scale factor})^2+(\text{source std dev})^2$ (see the note just after this list).
3. The standard deviation of the measurements, $\sigma_m$. I chose .4, but this is a parameter you can play around with. The higher the measurement sigma gets, the worse our filter performs, due to signal-to-noise ratio effects. A measurement sigma of .4 is already fairly high, so dropping this number to something like .25 will give "prettier" results.
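As an aside, my reading of that scale-factor constraint is that it keeps the long-run (stationary) variance of the source signal at 1; for the AR(1)-style random walk generated below, a quick check shows why:

$Var(X) = \alpha_s^2 Var(X) + \sigma_s^2 \quad\Rightarrow\quad Var(X) = \frac{\sigma_s^2}{1-\alpha_s^2} = \frac{\sigma_s^2}{\sigma_s^2} = 1 \quad\text{when } \alpha_s^2 = 1 - \sigma_s^2$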

In a real system, you could reasonably expect to have tolerances for both your source equipment and your measurement equipment. Maybe you could make measurements of this, or it could be found in the manufacturer's datasheet.

I will be showing this method through an example, using R. We will create a test case by following a few generic steps:
1. Create a source signal $X$, which has some known source noise.
2. Create a measurement signal $Y$, which introduces measurement noise on top of $X$.
3. Kalman filter the measurement signal: $k = filt(Y)$.
4. Compare the filtered data $k$ with the original transmission $X$.

So let's begin.



Step 1.
Create a source signal. I chose to generate a random walk.

# Generate the source: a scaled random walk where each sample is a scaled copy
# of the previous sample plus zero-mean Gaussian source noise
randomwalk <- function(num, source_scale, source_sigma) {
    randy <- c(0,1:num)
    for(i in 2:length(randy)) {
        randy[i] <- source_scale*randy[i-1]+rnorm(1, sd=source_sigma)
    }
    return(randy[2:length(randy)])  # drop the leading 0 used to seed the walk
}
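If you want to see what the source looks like on its own, a quick sketch (using the same parameter values that runit() uses further down) is:

source_sigma <- .01
source_scale <- sqrt(1-source_sigma**2)
x <- randomwalk(1000, source_scale, source_sigma)
plot(x, type="l", col="blue")   # one realization of the source signal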




Step 2.
Now, we need a function to create a measurement signal, which will further distort our input.

# Simulate the receiver: the measured signal is the sent signal plus
# zero-mean Gaussian measurement noise
received <- function(sent, meas_sigma) {
    measured <- sent+rnorm(length(sent), sd=meas_sigma)
    return(measured)
}
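Plotting the measured signal on top of the source makes the damage obvious (again just a sketch, reusing the x from the previous snippet and the meas_sigma value from runit() below):

meas_sigma <- .4
y <- received(x, meas_sigma)
plot(y, type="l", col="red")   # noisy measurements
lines(x, col="blue")           # original source buried underneath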



Step 3.
The signal is pretty badly distorted now... time to write a Kalman filter function.

# Predict step: the expected value of the next state, E[X_i] = alpha_s * E[X_prev]
pred_mean <- function(source_scale, prev_mean) {
    return(source_scale*prev_mean)
}

# Predict step: the standard deviation of the next state,
# sqrt(alpha_s^2 * Var(X_prev) + sigma_s^2)
pred_sigma <- function(source_scale, prev_sigma, source_sigma) {
    return(sqrt((source_scale**2)*(prev_sigma**2)+source_sigma**2))
}

# Update step: precision-weighted combination of the prediction and the measurement
update_mean <- function(pred_mean, pred_sigma, meas_val, meas_sigma) {
    numerator <- (pred_mean/(pred_sigma**2))+(meas_val/(meas_sigma**2))
    denominator <- (1/(pred_sigma**2))+(1/(meas_sigma**2))
    return(numerator/denominator)
}

# Update step: the combined precision is the sum of the two precisions
update_sigma <- function(pred_sigma, meas_sigma) {
    r <- (1/(pred_sigma**2))+(1/(meas_sigma**2))
    return(1/sqrt(r))
}
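These update functions are just the weighting equations from the start of the post, so a couple of quick sanity checks (made-up values, not part of the filter run) behave the way the earlier discussion says they should:

update_mean(0, 1, 2, 1)      # 1: equal confidence splits the difference
update_mean(0, 10, 2, 0.1)   # ~2: a vague prediction defers to the measurement
update_sigma(1, 1)           # ~0.707: combining two sources of information shrinks the sigma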


filt <- function(y, source_scale, source_sigma, meas_sigma) {
   last_mean <- 0
   last_sigma <- source_sigma
   k <- 1:length(y)
   for(i in 1:length(y)) {
       # Predict the current state from the previous estimate
       est_mean <- pred_mean(source_scale, last_mean)
       est_sigma <- pred_sigma(source_scale, last_sigma, source_sigma)
       # Store a draw from the predicted distribution as the filtered output
       k[i] <- est_mean+rnorm(1, sd=est_sigma)
       # Fold the new measurement back in to form the next step's prior
       last_mean <- update_mean(est_mean, est_sigma, y[i], meas_sigma)
       last_sigma <- update_sigma(est_sigma, meas_sigma)
   }
   return(k)
}
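One thing worth noting about filt(): the stored output k[i] is a random draw from the predicted distribution, not the posterior mean itself. If you would rather plot the more conventional Kalman point estimate, a small variation (hypothetical - the plots discussed below were made with filt() as written) is to store the updated mean instead:

# Variation on filt(): store the posterior (updated) mean as the filtered estimate
filt_mean <- function(y, source_scale, source_sigma, meas_sigma) {
   last_mean <- 0
   last_sigma <- source_sigma
   k <- 1:length(y)
   for(i in 1:length(y)) {
       est_mean <- pred_mean(source_scale, last_mean)
       est_sigma <- pred_sigma(source_scale, last_sigma, source_sigma)
       last_mean <- update_mean(est_mean, est_sigma, y[i], meas_sigma)
       last_sigma <- update_sigma(est_sigma, meas_sigma)
       k[i] <- last_mean   # posterior mean instead of a draw from the prediction
   }
   return(k)
}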

A few quick derivations are required for this code to make sense. The update_mean and update_sigma functions were described at the start of the post, but where on earth did the pred_mean and pred_sigma functions come from?
In the notation of the Kalman filter, we currently have
$X_i = \alpha_s X_{prev}+\omega_s$, where $\alpha_s$ is our source scale and $\omega_s$ is zero-mean Gaussian noise with the source standard deviation $\sigma_s$. In Kalman filter terms, this equation is the state model, and we got it from the knowledge we have about how the randomwalk values were generated.
The second equation, our measurement (observation) model, is $Y_i = X_i + \omega_m$, where $\omega_m$ is zero-mean noise with the measurement standard deviation $\sigma_m$.
In order to get the equations for our pred_mean and pred_sigma, we want to find the expected value and variance of the state model, which looks like this.
$E[X_i] = E[\alpha_s X_{prev}+\omega_s]$

using the linearity of expectation...

$E[X_i] = \alpha_s E[X_{prev}]+E[\omega_s]$

The expected value of zero-mean noise is 0, so now we find

$E[X_i] = \alpha_s E[X_{prev}]$

This matches exactly with the code for pred_mean - our new guess is the old mean multiplied by the scale factor.
The pred_sigma function is a little trickier.
$Var(X_i) = Var(\alpha_s X_{prev}+\omega_s)$
Since $Var(Q)$ is the same as $E[(Q-E[Q])^2]$, which expands to $E[Q^2]-E[Q]^2$, we now see
$Var(X_i) = E[(\alpha_s X_{prev}+\omega_s)^2]-E[\alpha_s X_{prev}+\omega_s]^2$
Expanding the first term (and using the fact that the noise $\omega_s$ is independent of $X_{prev}$), this now becomes
$Var(X_i) = \alpha_s^2 E[X_{prev}^2]+2\alpha_s E[X_{prev}] E[\omega_s]+E[\omega_s^2] - E[\alpha_s X_{prev} + \omega_s]^2$

Continuing, we see that the last term,

$- E[\alpha_s X_{prev} + \omega_s]^2$, can be expanded to
$- E[\alpha_s X_{prev}+\omega_s]E[\alpha_s X_{prev}+\omega_s]$

which becomes

$-(E[\alpha_s X_{prev}]+E[\omega_s])(E[\alpha_s X_{prev}]+E[\omega_s])$

FOIL, and we now see

$-\alpha_s^2E[X_{prev}]^2-2\alpha_s E[\omega_s]E[X_{prev}]-E[\omega_s]^2$


One view of this expression is that $\omega_s$ is a normally distributed random variable with a mean of 0. $E[\omega_s]$ is 0, and squaring that is still 0, so both the $E[\omega_s]$ and $E[\omega_s]^2$ terms go to 0. It is important to note that $E[\omega_s^2]$ will NOT go to zero: since the noise has zero mean, $E[\omega_s^2]$ is exactly its variance, $\sigma_s^2$.

However, another, more natural view is to note that $E[\omega_s^2]-E[\omega_s]^2$ is identical to $Var(\omega_s)$, which is $\sigma_s^2$, and that the $2\alpha_s$ cross terms cancel each other out.

Either way, the final result is the same.

$Var(X_i) = \alpha_s^2 Var(X_{prev}) + \sigma_s^2$

This matches the code for pred_sigma - the square root of (the scale factor squared times the previous variance, plus the source variance) is our best guess for the new sigma.

See http://mathworld.wolfram.com/Variance.html for an alternate derivation that leads to the same result.
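If you would rather convince yourself numerically than algebraically, a throwaway sketch (with arbitrary values, not the filter parameters) comparing the empirical spread of the state equation against pred_sigma does the trick:

# Empirical check of pred_sigma: simulate x_i = alpha_s*x_prev + source noise many times
alpha <- 0.8; prev_sigma <- 0.5; src_sigma <- 0.6
x_prev <- rnorm(100000, sd=prev_sigma)
x_i    <- alpha*x_prev + rnorm(100000, sd=src_sigma)
sd(x_i)                                  # empirical standard deviation
pred_sigma(alpha, prev_sigma, src_sigma) # analytic: sqrt(0.8^2*0.5^2 + 0.6^2) ~= 0.721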



Step 4. 
Now we can write a function to tie it all together.

runit <- function() {
     source_sigma <- .01
     source_scale <- sqrt(1-source_sigma**2)
     meas_sigma <- .4
     x <- randomwalk(1000, source_scale, source_sigma)
     y <- received(x, meas_sigma)
     k <- filt(y, source_scale, source_sigma, meas_sigma)
     plot(y, type="l", col="red")
     lines(k, col="green")
     lines(x, col="blue")

}

The blue is the source signal, the red is the signal at measurement, and green is the recovered signal. Even when measurement noise washes out the root signal, we can recover the original fairly well.


Try tuning each of the sigmas, and see how the results change - it's pretty interesting.
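If you prefer a number to eyeballing the plot, a small sketch that reuses the functions above (with the same parameters as runit()) compares the mean squared error of the raw measurements and of the filtered estimate against the true source:

source_sigma <- .01
source_scale <- sqrt(1-source_sigma**2)
meas_sigma <- .4
x <- randomwalk(1000, source_scale, source_sigma)
y <- received(x, meas_sigma)
k <- filt(y, source_scale, source_sigma, meas_sigma)
mean((y-x)**2)   # error of the raw measurements against the source
mean((k-x)**2)   # error of the filtered estimate - should be noticeably smaller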



There you have it! A simple, logical derivation of the Kalman filter as a recursive Bayesian filter. In the future, I plan to write about more complex statistical processing methods as I learn them, such as how to run this simulation with no known parameters, or an implementation of one of the non-linear Kalman filter algorithms.

As always, critique is both welcome and appreciated. 
 
CODE
To run this code, simply copy and paste into a source file. Then open an R interpreter, do source("path/to/source/file"), then do runit().

randomwalk <- function(num, source_scale, source_sigma) {
    randy <- c(0,1:num)
    for(i in 2:length(randy)) {
        randy[i] <- source_scale*randy[i-1]+rnorm(1, sd=source_sigma) 
    }
    return(randy[2:length(randy)])
}

received <- function(sent, meas_sigma) {
    measured <- sent+rnorm(length(sent), sd=meas_sigma)
    return(measured)
}

pred_mean <- function(source_scale, prev_mean) {
    return(source_scale*prev_mean)
}

pred_sigma <- function(source_scale, prev_sigma, source_sigma) {
    return(sqrt((source_scale**2)*(prev_sigma**2)+source_sigma**2))
}

update_mean <- function(pred_mean, pred_sigma, meas_val, meas_sigma) {
    numerator <- (pred_mean/(pred_sigma**2))+(meas_val/(meas_sigma**2))
    denominator <- (1/(pred_sigma**2))+(1/(meas_sigma**2))
    return(numerator/denominator)
}

update_sigma <- function(pred_sigma, meas_sigma) {
    r =(1/(pred_sigma**2))+(1/(meas_sigma**2))
    return(1/sqrt(r))
}

filt <- function(y, source_scale, source_sigma, meas_sigma) {
   last_mean <- 0
   last_sigma <- source_sigma
   k <- 1:length(y)
   for(i in 1:length(y)) {
       est_mean <- pred_mean(source_scale, last_mean)
       est_sigma <- pred_sigma(source_scale, last_sigma, source_sigma)
       k[i] <- est_mean+rnorm(1, sd=est_sigma)
       last_mean <- update_mean(est_mean, est_sigma, y[i], meas_sigma)
       last_sigma <- update_sigma(est_sigma, meas_sigma)
   }
   return(k)
}

runit <- function() {
    source_sigma <- .01
    source_scale <- sqrt(1-source_sigma**2)
    meas_sigma <- .4
    x <- randomwalk(1000, source_scale, source_sigma)
    y <- received(x, meas_sigma)
    k <- filt(y, source_scale, source_sigma, meas_sigma) 
    plot(y, type="l", col="red")
    lines(k, col="green")
    lines(x, col="blue")

}

Sunday, February 12, 2012

Copy-free Data From C In Python

BACKGROUND
Lately, I have found myself doing heavy computational work in C, storing large volumes of data in C arrays. However, once the computational work is done, I also need to explore the data and make algorithmic decisions about various features of the dataset. What I needed was an embedded interpreter.

Luckily, there is a ton of information out there about embedding interpreters into C and C++ code. Languages such as Lua, Tcl, JavaScript, and Python are all well suited to this, and bring the ability to explore algorithmic changes in scripted code without having to recompile each time. Lua especially has a large following in the gaming community, with well documented examples of usage in C and C++ code.

ATTEMPT WITH LUA
Lua has a very simple, stack-based interface that works very well for simple tasks. However, it does require data to be copied when it is pushed onto the stack, so that the Lua environment has its own copy which can be garbage collected. Raw C pointers can be pushed in the form of lightuserdata, but those pointers didn't appear to be of much use here.

Due to the size of my dataset, any kind of copying after initialization in my C code is a big performance hit. The data transition to Lua was fast, but not fast enough. I needed another solution.

ATTEMPT WITH PURE PYTHON
The Python-C API is very powerful and, like most of Python, extensively documented. The basic concept is to instantiate a Python environment, create a PyObject* in C, and then run a script that uses the initialized PyObject. The PyObject* can be instantiated using different functions, which can create any type of Python object.

However, there is a problem. The default Python API requires data to be copied into a list if the data size and values are not known at compile time. This put me in a similar position as the Lua attempt, and copying my data to Python took even longer than the copy to Lua.

Things were grim, but there was one, last ray of hope... NumPy!
 
THE ANSWER
Using the NumPy-C API, it is possible to create a PyArray from a pointer. What this means is a pointer to the start of a data array can be passed into a function, and a PyObject of type NumPy array will be created, where the data content of the NumPy array is actually the C array! With no data copied across the boundary, I have both the flexibility of a scripted interface and the raw computational power of C, with near zero overhead.

The only downside is that the Python code will not handle freeing of the memory pointed to by the PyArray. This means you may have to free() the memory in your C code after the Python code has run, if you allocated it using malloc. You should also be extra careful not to free the memory while Python is still trying to look at it. Once again, trading complexity for speed has its benefits.

I have some (excessively?) strong beliefs that NumPy may be the future of data processing, but I think I'll keep that sermon to myself.

REQUIRED PACKAGES AND COMPILE ARGUMENTS
On Ubuntu 11.10, I needed the following packages: python-dev and python-numpy. I acquired them by doing sudo apt-get install python-dev python-numpy

To compile, I used:
gcc -O3 -std=c99 -I$NUMPY test.c -lpython2.7
    where NUMPY was previously declared using
    export NUMPY=/usr/lib/pymodules/python2.7/numpy/core/include/numpy/

Once compiled, I simply did ./a.out test.py
    where test.py was the name of my Python file

This should print out the numbers 0-999, which are the numbers we stored in the C code. We are accessing C code values, directly in Python, without copying the data. Nice!

THE PYTHON CODE 
for i in data:
    print i

THE C CODE 
#include <python2.7/Python.h>
#include <arrayobject.h>

int main(int argc, char** argv) {
    if (argc < 2) {
        printf("This program requires a path to a Python file as an argument!\n");
        return 1;
    }
    //Initialize general Python environment
    Py_Initialize();
    //Initialize NumPy 
    import_array();

    //Create array of values
    int size = 1000;
    int vals[size];
    for (int i=0; i<size; i++) {
        vals[i] = i;
    }
    
    //Indicate that the array is 1-D
    npy_intp dims[1] = {size}; 

    //Create NumPy array that shares data with vals
    PyObject* data = PyArray_SimpleNewFromData(1, dims, NPY_INT, &vals[0]);
    
    //Add data array to globally accessible space
    PyObject* m = PyImport_AddModule("__main__");
    PyModule_AddObject(m, "data", data);

    //Optional ability to make data read-only. Nice if you want to run multiple
    //python files over the same base data
    PyRun_SimpleString("data.setflags(write=False)");
    PyObject* py_f = PyFile_FromString(argv[1], "r");
    PyRun_SimpleFile(PyFile_AsFile(py_f), argv[1]);
    Py_Finalize();
    return 0;
}