Wednesday, November 21, 2012

Loading Ubuntu 12.10 on a Macbook Air

To create a live USB for installation, I had to follow the instructions at the link below. The basic steps were to convert the .iso to a .img using diskutil, then dd that image onto a USB drive, which was formatted as FAT32 in my case. I think other formatting schemes may work as well.

Update: When doing this from a non-Mac platform, one can use
mkfs.hfsplus to create a .dmg file. To do this, you will need to install the hfsplus, hfsprogs, and hfsutils packages. You can then convert the .dmg to .img format with dmg2img.
The whole command is sudo apt-get install hfsplus hfsprogs hfsutils dmg2img
See the linked instructions for more details

Create a blank file of the size needed
dd if=/dev/zero of=/path/to/output.dmg bs=1 count=0 seek=590M
mkfs.hfsplus -v 'Description of DMG file' output.dmg

Mount it, copy all files from the ISO to the DMG, then unmount.

Once the live USB is created, simply reboot the Macbook Air while holding the alt (option) key, then select the yellow USB UEFI icon as the boot device. Make sure to install 3rd party sources when running the installer, as the Broadcom wireless device uses a 3rd party firmware blob.

After running the Ubuntu installer, everything should be running, but there are still a few usability issues to fix. I will cover these issues under separate headings below.

Installing XMonad is the next crucial step for me. I don't install a wireless manager; I handle wifi with wpa_supplicant instead.

sudo apt-get install xmonad xmobar suckless-tools

Suckless-tools is a package of helpful utilities for xmonad, such as dmenu. I use dmenu for launching graphical programs like firefox.
I have two example configs for a MacBook Air running XUbuntu 12.04. Simply put the .xmobarrc file at ~/.xmobarrc, and put the xmonad.hs file at ~/.xmonad/xmonad.hs

Wireless in XMonad is ...interesting. Here I will document some common wireless connectivity scenarios, and how to get on the net.

Easiest - install trayer by doing sudo apt-get install trayer
Now simply do
trayer & 
to start the network manager applet in the bottom left hand corner. Simply click and connect as normal. When finished, kill trayer by doing pkill trayer

After installing Ubuntu 12.10 and configuring XMonad, things appeared pretty good. The only issue so far is that the "right click" of the mousepad is two finger click, instead of clicking in the bottom right hand corner of the touchpad.

Tuesday, November 20, 2012

Using the Symmetricom bc635PCIe card as an NTP timesource

NTP timeservers are a crucial part of any large-scale, distributed system, especially if the components of the system are spread across multiple sites. There are many commercial and homebrew options for setting up an NTP timesource. Some are fairly simple to set up, and others (like the Symmetricom bc635 PCIe card) require a little more work.

I have recently set up an NTP server using the aforementioned card, and after all my troubles to get it set up, I decided to document the procedures used in this blog post.

The default ntpd that comes with CentOS 6.2 does not have support for the BANCOMM/Symmetricom PCIe GPS card.

Download the latest ntp source. For this tutorial, I used version 4.2.6p5

There is a PDF from Symmetricom support that outlines most of the steps necessary to fix the driver. This list is largely a copy of their "how-to". However, I have elaborated on a few of the steps where I had issues.
  1. Install the bc635/637 driver. To do this, either run make install OR cp the windrvr6.ko and windrvr6_usb.ko files into the /lib/modules/drivers/ area, then do a depmod -a. There is a script called wdreg that will load up the driver, wdreg windrvr6. I added a call to the wdreg script in /etc/rc.local in order to get the module loading on boot, one could also add a rule to /etc/modules.conf.
  2. To verify, run the bc63xPCIcfg sample program that comes with the driver, from the sample/ folder.
  3. After rebooting the computer, the sample program should still work.
  4. Copy the library file from the sample/ folder to /usr/lib/ (or another library directory of your choice).
  5. Make a link to the device driver, ln -s /dev/windrvr6 /dev/btfp0 (btfp0 is the   old /dev file that ntpd looks for)
  6. If you haven't already done so, download the ntpd source code.
  7. I used the ntp-4.2.6p5 source for this, but the 4.2.6p2 source has been used by others
  8. Extract the source, then modify the Makefile.am file in the ntpd/ folder. Find the segment that has ntpd_LDADD = $(LDADD) $(LIBOPTS_LDADD) ../libntp/libntp.a -lm @LCRYPTO@ @LSCF@, and add -lbcsdk to it.
  9. Run this command in the ntpd/ directory - ./configure --enable-BANCOMM --enable-linuxcaps
  10. Look at the Makefile (NOT Makefile.am). There should be a line like ntpd_LDADD = $(LDADD) $(LIBOPTS_LDADD) ../libntp/libntp.a -lm -lbcsdk. If not, add the -lbcsdk here, but know that running ./configure again will overwrite this Makefile.
  11. Edit the refclock_bancomm.c file as follows
At line 182
extern uint32_t __attribute__ ((weak)) bcReadBinTimeEx(SYMMT_PCI_HANDLE, uint64_t *, uint64_t*, uint32_t*, uint8_t*);

At line 440
  case 2:                         /* Linux/Windows, PCI, 2 32bit time words */
                            {
                            struct tm temp;
                            uint64_t btm[2];
                            uint32_t nano;
                            uint8_t dmy;
                            if (bcReadBinTimeEx(stfp_handle, &btm[1], &btm[0], &nano, &dmy) == 0)
                                msyslog(LOG_ERR, "get_datumtime error: %m");
                            temp = *gmtime((time_t *)&btm[1]);
                            time_vme->year = (unsigned short)temp.tm_year+1900;
                            time_vme->day = (unsigned short)temp.tm_yday+1;
                            time_vme->hr = (unsigned short)temp.tm_hour;
                            time_vme->mn = (unsigned short)temp.tm_min;
                            time_vme->sec = (unsigned short)temp.tm_sec;
                            time_vme->frac = (btm[0] & 0x000FFFFF) * 1000;
                            time_vme->frac += (nano & 0x0000000F) * 100;

                            time_vme->status = (unsigned short)dmy;

                            #ifdef DEBUG
                                printf("Epoch time read from the BANCOMM card -> %ld\n", btm[1]);
                                printf("Decimal time read from the BANCOMM card -> Day:%d Year:%d H%02d:M%02d:S%02d.%06lu%d Status:%d\n",
                                       temp.tm_yday+1, temp.tm_year+1900, temp.tm_hour, temp.tm_min, temp.tm_sec,
                                       (btm[0] & 0x000FFFFF) * 1000, nano & 0x0000000F, dmy);
                                printf("Decimal time read into time_vme -> Day:%hu Year:%hu H%hu:M%hu:S%hu.%lu Status:%hu\n",
                                       time_vme->day, time_vme->year, time_vme->hr, time_vme->mn, time_vme->sec,
                                       time_vme->frac, time_vme->status);
                            #endif
                            //tvme_fill(time_vme, btm);
                            break;
                            }

One can remove all the commented lines, as well as the lines surrounded by
#ifdef DEBUG, but I have left them here for completeness.

Now run make. This should build an ntpd executable linked against the bcsdk library in order to read time from the Symmetricom PCIe GPS. If anyone knows how to handle the nanoseconds field, please let me know and I will update this post!

I then had to modify ntp.conf to use the device

# No restrictions on loopback
restrict 127.0.0.1
#Allow systems to sync
restrict mask nomodify notrap
restrict mask nomodify notrap

#Setup GPS PCIe Clock
restrict 127.127.16.0 mask 255.255.255.0
server 127.127.16.0 prefer mode 2 iburst minpoll 4 maxpoll 4

#Local clock source
#fudge 127.127.1.0 stratum 8

driftfile /var/lib/ntp/drift
authenticate no

Here the server address 127.127.16.x is a special subnet which refers to the Symmetricom (or BANCOMM) hardware clock when ntpd is compiled to support this radio.

The mode keyword is specific to each hardware radio. For Symmetricom, I have been unable to find documentation on what this mode means, but for Meinberg clocks the mode selects the format of the time string the radio sends.

Assuming that Symmetricom follows the same convention, this would mean it is using a time string to send the information.
iburst allows the computer to sync more quickly with GPS time.
minpoll and maxpoll specify the polling interval as a power of two in seconds - here 4, so the timesource is polled every 2^4 = 16 seconds.

For CentOS 6.2, I had to replace the existing ntpd in /usr/sbin/ntpd with the custom built version.

I have also had problems with ntpd not starting on boot -
I had to manually add a call starting ntpd to /etc/rc.local, even though chkconfig showed ntpd on. It seems that ntpd starts but then crashes; to be continued as I explore the issue further.

Running ntpq -p should show the GPS clock, and you should see the reach column increase as the computer syncs with the card. If you see things working, but then an x appears in front of the entry for the card, that means there is another ntpd source that disagrees time-wise.

I have seen this in the past, which worries me a little, as I am supposedly pulling time straight from GPS. Maybe there is a bug in my code - or maybe the other server was wrong? In any case, using the GPS card as the only time source works great and has greatly decreased the footprint of my system.

    Sunday, September 16, 2012

    Programming the ATTiny25 with Ubuntu

    To get all of the libraries and packages necessary to put a basic program on your AVR chip, run the following command.
    sudo apt-get install gcc-avr avrdude avr-libc

    The ATTiny25 is plugged into a 40-pin ZIF socket, which is wired for HVSP mode. See this video for ZIF socket installation instructions

    Other programming modes for AVR chips include PP and ISP - see this [PDF] link for pin wiring in order to program various AVR chips.

    avr-gcc -Os -o hello.elf -mmcu=avr25 hello.c
    avr-objcopy -j .text -O ihex hello.elf hello.hex
    sudo avrdude -p t25 -c dragon_hvsp -P usb -e -B10 -U flash:w:hello.hex

    If wired for ISP or PP mode, you should replace dragon_hvsp in the last step with dragon_isp or dragon_pp.

    If you see this error:
    jtagmkII_close(): bad response to GO command: RSP_ILLEGAL_EMULATOR_MODE

    This error is actually OK - apparently HVSP mode shouldn't respond to this command, but avrdude sends it in ALL programming modes - see this link

    /etc/avrdude.conf has a lot of (specific) information about each supported chip.


    Thursday, September 13, 2012

    Writing better C/C++ code

    // forward declaration of our struct before its definition
    struct ClassA;
    // The actual function table that holds function pointers.
    typedef struct {
        void (*ClassA)(struct ClassA*); // the "constructor"
        void (*set)(struct ClassA*);    // set function
        int (*get)(struct ClassA*);     // get function
    } ClassA_functiontable;
    typedef struct ClassA {
        int data;
        ClassA_functiontable *vtable; // ClassA virtual method table
    } ClassA;
    // ClassA's function prototypes (forward declarations)
    void ClassA_constructor(ClassA *this);
    void ClassA_set(ClassA *this);
    int ClassA_get(ClassA *this);
    // Static instance of the function table struct that contains ClassA's functions.
    ClassA_functiontable ClassA_vtable = {ClassA_constructor,
                                          ClassA_set,
                                          ClassA_get };
    // Implementation of functions and the constructor.
    void ClassA_constructor(ClassA *this) {
        this->vtable = &ClassA_vtable;
        this->data = 10;
    }
    void ClassA_set(ClassA *this) {
        printf("ClassA is increasing\n");
        this->data++;
    }
    int ClassA_get(ClassA *this) {
        return this->data;
    }
    To compile Qt related code with g++ (on Ubuntu 11.10+):
    g++ -I /usr/include/qt4 -I /usr/include/qt4/QtCore -l QtCore test.cpp test.h -o test.test
    Alternatively, run
    qmake -project
    then fill in the resulting .pro file - much easier

    _Generic macros... awesome

    Set a single function to be called any time exit() is run

    Ensuing discussion with good error checking macros

    Excellent list of good compile time options

    Example of parsing commandline arguments for C or C++ code
    void ShowUsageAndExit()
    {
         float VERSION=1.0;
         printf("Special sample code v%f\n",VERSION);
         printf("Usage: specsamp [options]\n");
         printf("\t -h .......... prints this help\n");
         exit(1);
    }

    int main(int argc, char** argv)
    {
        int arg=1;
        while(arg<argc) {
            if(argv[arg][0] != '-') {
                ShowUsageAndExit();
            }
            switch(argv[arg][1]) {
                case 'h':
                    ShowUsageAndExit();
            }
            arg++;
        }
        return 0;
    }

    A quick sample of how to write C modules that will compile fine with a C++ compiler as well. For header file named test.h, something like this would be expected:
    #ifndef _TEST_H_
    #define _TEST_H_
    #ifdef __cplusplus
    extern "C" {
    #endif

    /* C function declarations go here */

    #ifdef __cplusplus
    }
    #endif

    #endif //_TEST_H_

    Use a function like this to safely malloc:
    void* SafeMalloc(size_t sz)
    {
       void *m = malloc(sz);
       if (m == NULL) {
          fprintf(stderr, "Malloc failed for size %zd\n", sz);
          exit(EXIT_FAILURE);
       }
       return m;
    }

    For passing fields to subfunctions, malloc() and free() is the way to go (or new and delete in pre-C++11 C++)

    Avoiding branching for GPGPU, instead of:
    if (cond[i] == 0)
        { out[i] = f(i); }
    else
        { out[i] = g(i); }

    Use this:
    out[i] = f(i) + cond[i] * (g(i) - f(i))

    You compute everything - just as in the "standard" pattern - and you have another multiply-and-add, but you have only one commit to memory. It's sometimes better even in nested conditions.
    (Not my words, originally commented by David on this post)
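    The branch-free pattern is easy to demonstrate with NumPy (names here are illustrative, and cond is assumed to be 0/1 valued):

```python
import numpy as np

i = np.arange(8)
cond = i % 2              # 0/1 mask: 0 -> keep f, 1 -> take g
f = i * 10                # result of the "then" computation
g = i * 100               # result of the "else" computation

# Compute both sides everywhere, then blend with one multiply-and-add.
out = f + cond * (g - f)

# Equivalent branching version, for comparison:
ref = np.where(cond == 1, g, f)
```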

    (gdb) define xxd
    >dump binary memory dump.bin $arg0 $arg0+$arg1
    >shell xxd dump.bin
    >end
    (gdb) xxd &main 10
    0000000: 0000 0000 0000 0000 0000 0000 4d8c a7f7 ............M...
    0000010: ff7f 0000 0000 0000 0000 0000 c8d7 ffff ................
    0000020: ff7f 0000 0000 0000

    Monday, July 9, 2012

    Loading Debian on a WM8650 netbook

    This is the netbook I purchased - at 70 dollars it was a pretty great deal if you like to hack around and put Linux on things.

    I was inspired by a post on Hack A Day about putting ArchLinux on one of these netbooks - I like Debian more, so I figured I would start with the rootfs and eventually try to get a custom kernel going. This post will walk through the rootfs setup side of this process.

    Using the instructions from here
    I got an SDCARD with Debian to boot on my system. It took a very long time to boot, but it did come up eventually. However, there were a few issues on my hardware.

    First major problem - keyboard wasn't working correctly. From a post deep in this webpage...

    I finally managed to get the keyboard problem solved.

    In a first desperate attempt I changed the /etc/X11/xorg.conf file

    Section "InputDevice"
       Identifier "keyboard"
       Driver "evdev"
       Option "Device" "/dev/input/event1"
    #  Option "Device" "/dev/input/event2"
    EndSection

    Then the keyboard worked BUT NOT the touchpad any more.
    So I changed :

    Section "InputDevice"
       Identifier "utk_touch"
       Driver "evdev"
       Option "Device" "/dev/input/event2"
    #  Option "Device" "/dev/input/event3"
    EndSection

    This fix solved the keyboard problem for me.

    To fix the wifi, go to the start menu, under internet, then do WIFI ON.
    The output of lsusb should mention Ralink Technology Corp. RT5370

    cd /lib/modules/
    modprobe -r rt3070sta
    rm rt3070sta.ko

    cp rt3070sta_rt5370.ko rt3070sta.ko

    Next, open /usr/local/sbin/wifi-on (nano /usr/local/sbin/wifi-on) and comment out the line that starts the network manager. I also added a line for ifconfig ra0 up.

    Edit /usr/local/sbin/wifi-disconnect, and remove the line that turns off network-manager.

    Next restart the netbook. Now when you do WIFI ON as above, and then ifconfig, you should see a device labeled ra0. This is the wifi device.
    Connecting to your network will require the use of wpa_supplicant

    First, generate a .conf file for the network you want to connect to by doing
    wpa_passphrase <ssid> <passphrase> > <configfile>. In my case this was something like
    wpa_passphrase "MyWifiNetwork" "coolcoolschool" > home.conf

    Next, associate with the wifi network by doing
    wpa_supplicant -Dwext -ira0 -chome.conf -B
    Finally, to get an ipaddress, do
    dhclient ra0

    You can also scan the surrounding area for networks to join by doing iwlist ra0 scanning


    Sunday, July 8, 2012

    Loading Ubuntu on the Asus Transformer

    Equipment needed
    ASUS Transformer TF101 running Android 4.0.3
    USB tether cable for Transformer
    MiniSD card (or a microSD card plus adapter)

    Plug the cable from your Transformer to the host PC. I kept this cable connected throughout the entire process.

    To install Ubuntu, I first needed root access to the Android system. By following the instructions here , I was able to root my device. I had to download the flash kit from here instead of the link provided in the instructions. Basically just select option 1 and the program will take care of itself.

    Next I needed to install a program called ClockworkMod Recovery. Using the instructions here I installed CWM by downloading the RecoveryInstaller.apk file, copying it to the directory where I unzipped the primetime rootkit from the previous step, then doing ./adb install RecoveryInstaller.apk

    Make sure to have an SDCARD that is partitioned as ext2 - I had some trouble originally with a FAT32 formatted card.

    Create the kernelblob using the instructions from the first post here
    Get ubuntu.img from this link
    I used the OLIFE zImage (2636-zImage), so YMMV with other kernels.
    Now copy both the kernelblob and ubuntu.img onto your SDCARD, insert the SDCARD into your Transformer, and boot the Transformer into CWM recovery mode.
    REMINDER - From the Android Authority link above -
    To manually boot into ClockworkMod Recovery, power off your Asus Eee Pad Transformer. Press and hold both Volume Down and Power buttons. When you see words scrolling at the top left of the tablet, immediately push Volume Up within 5 seconds.

    Once the transformer is in recovery mode, get an adb shell up by doing
    ./adb shell from the PC.

    This shell should have a # sign - which indicates you have root privileges. Run the following commands from the prompt.
    mount /dev/block/sda1 /sdcard
    cd /sdcard
    dd if=kernelblob of=/dev/block/mmcblk0p4
    dd if=ubuntu.img of=/dev/block/mmcblk0p7 

    The last dd command will take some time, but once it is done, you can select Reboot System from the CWM prompt on your Transformer. The first time you boot, it may take a long time (something about on-line partition shuffling?) but eventually you will see a standard Ubuntu first run setup prompt.

    Currently the only issues I have found are - no sound and no flash. There are fixes for these on the xda-developer forums but I have not had time to do them yet.

    There may come a time when you wish to go "back to stock". Fear not! Simply follow the link below.

    Note: I had to format my micro SDCARD to FAT32 for this to work... very strange since I had the opposite problem before.


    Sunday, February 19, 2012

    A Bayesian Approach To Kalman Filtering

    Ripped straight from Wikipedia...
    The Kalman filter is an algorithm, commonly used since the 1960s for improving vehicle navigation (among other applications, although aerospace is typical), that yields an optimized estimate of the system's state (e.g. position and velocity). The algorithm works recursively in real time on streams of noisy input observation data (typically, sensor measurements) and filters out errors using a least-squares curve-fit optimized with a mathematical prediction of the future state generated through a modeling of the system's physical characteristics.


    There are both linear and non-linear forms of the Kalman filter, with the non-linear forms being the Extended Kalman Filter (EKF), the invariant Extended Kalman Filter, and the Unscented Kalman Filter (UKF). The non-linear versions of this algorithm represent the current "gold standard" in many application areas, including GPS and navigation.

    Before jumping in the deep end of the pool, I decided to implement a simple example that shows the ideas and implementation of Kalman filtering, using a recursive Bayesian approach.


    The Kalman filter is often derived from a matrix equation standpoint. The math, at least to me, is long, involved, and fairly nasty to solve without the help of some MATLAB matrix witchery. However, there is also a second, more "gut level" way to approach the Kalman filter - by approaching it as a case of recursive Bayesian filtering.

    The derivation for the following equations can be found on pg. 47 of Bayesian Data Analysis, Second Edition, by Gelman, Carlin, Stern, and Rubin.

    Basically, it is possible to estimate the mean of a normal distribution by following a weighting equation for the mean and variance, which can be represented as follows.


    $\Large{\mu_1 = \frac{\frac{1}{\tau_0^2}\mu_0 + \frac{1}{\sigma^2}y}{\frac{1}{\tau_0^2} + \frac{1}{\sigma^2}}}$

    $\Large{\frac{1}{\tau_1^2} = \frac{1}{\tau_0^2} + \frac{1}{\sigma^2}}$

    Here, $\mu_1$ represents the new estimate for the mean and $\tau_1^2$ is the estimate for the new variance. $\mu_0$ represents the old mean, $\tau_0^2$ represents the old variance, $y$ is the new measured value, and $\sigma^2$ is the measurement variance. I will do an example later on to demonstrate this, but first, let's try to decide what this equation is really saying.

    If the old value for variance, $\tau_0^2$, is very large, then $\mu_0$ is multiplied by a very small weight, $\frac{1}{\tau_0^2}$, effectively making the equation...

    $\Large{\mu_1 = \frac{\frac{1}{\sigma^2}y}{\frac{1}{\sigma^2}}}$

    which can be further reduced to simply...

    $\Large{\mu_1 = y}$

    What this means, is that if we are not very confident ($\tau_0^2$ is large) of our old mean, $\mu_0$, our best guess for $\mu_1$ is simply the value we measured, $y$. Our new variance also reduces to

    $\Large{\tau_1^2 = \sigma^2}$

    Conversely, if we are not very confident in the new result $y$, our new mean $\mu_1$ is basically the same as $\mu_0$. The new mean is based largely on the weighting between the new variance and the old - so if we are equally confident in both the previous mean and the new measurement, our best estimate for $\mu_1$ will split the difference.
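    As a quick sanity check of this behavior, the two update equations can be written directly in Python (function names are mine, not from the book):

```python
import math

def update_mean(mu0, tau0, y, sigma):
    # Precision-weighted average of the old mean and the new measurement.
    num = mu0 / tau0**2 + y / sigma**2
    den = 1 / tau0**2 + 1 / sigma**2
    return num / den

def update_sigma(tau0, sigma):
    # Precisions (inverse variances) add.
    return math.sqrt(1.0 / (1.0 / tau0**2 + 1.0 / sigma**2))
```

    With a huge $\tau_0$ the result collapses to $y$; with equal confidence in both, it splits the difference.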

    Using this update function, it is easy to wrap a new estimate back into the prior distribution, which is the core idea behind Bayesian analysis. But how does it come into play with Kalman filtering?

    The Kalman filter can be boiled down to a few basic steps.
    1. Predict a future value, based on previous information. (Predict)
    2. Compare prediction and measured value, using results as the new previous information. (Update)
    3. If you run out of data, or are satisfied with the estimate, exit. Otherwise, GOTO 1.

    Pretty simple, right? This method can yield some very interesting results, cutting through measurement error and giving close approximation of the root signal.

    From a high level, the Bayesian derivation is

    $P(X_i|Y_{1:i}) \propto P(Y_i|X_i,Y_{1:i-1})\, P(X_i|Y_{1:i-1})$

    Because Bayesian analysis usually wraps its results back into the prior model,
    this reduces to

    $P(X_i|Y_{1:i}) \propto P(Y_i|X_i)\, P(X_i|Y_{1:i-1})$

    What this means is we have a measurement likelihood, $P(Y_i|X_i)$, used in conjunction with a prediction term, $P(X_i|Y_{1:i-1})$, which will be derived a bit later; the weighted update described above combines the two. Together these functions form the core of the Kalman filtering operation: Predict, Measure, Update, Repeat.

    The key things we need to know are:
    The standard deviation of our source. I chose .6
    The scale factor of the source. This can be calculated from
            $1 = (\text{scale factor})^2 + (\text{source std dev})^2$
    The standard deviation of the measurements. I chose .5, but this is a parameter you can play around with. The higher the measurement sigma gets, the worse our filter performs, due to signal to noise ratio effects. An initial deviation of .5 is fairly high, so dropping this number to something like .25 will give "prettier" results.

    In a real system, you could reasonably expect to have tolerances for both your source equipment and your measurement equipment. Maybe you could make measurements of this, or it could be found in the manufacturer's datasheet.

    I will be showing this method through an example, using R. We will create a test case by following a few generic steps:
    1. Create a source signal with some noise variance $X$.
    2. Create a measurement signal, which will introduce measurement variance $Y$.
    3. Kalman filter the measurement signal $k = filt(Y)$.
    4. Compare the filtered data $k$ with the original transmission $X$.

    So let's begin.

    Step 1.
    Create a source signal. I chose to generate a random walk.

    randomwalk <- function(num, source_scale, source_sigma) {
        randy <- c(0,1:num)
        for(i in 2:length(randy)) {
            randy[i] <- source_scale*randy[i-1]+rnorm(1, sd=source_sigma)
        }
        randy
    }

    Step 2.
    Now, we need a function to create a measurement signal, which will further distort our input.

    received <- function(sent, meas_sigma) {
        measured <- sent+rnorm(length(sent), sd=meas_sigma)
        measured
    }

    Step 3.
    The signal is pretty badly distorted now... time to write a Kalman filter function.

    pred_mean <- function(source_scale, prev_mean) {
        source_scale*prev_mean
    }

    pred_sigma <- function(source_scale, prev_sigma, source_sigma) {
        sqrt((source_scale*prev_sigma)**2+source_sigma**2)
    }

    update_mean <- function(pred_mean, pred_sigma, meas_val, meas_sigma) {
        numerator <- (pred_mean/(pred_sigma**2))+(meas_val/(meas_sigma**2))
        denominator <- (1/(pred_sigma**2))+(1/(meas_sigma**2))
        numerator/denominator
    }

    update_sigma <- function(pred_sigma, meas_sigma) {
        r = (1/(pred_sigma**2))+(1/(meas_sigma**2))
        sqrt(1/r)
    }

    filt <- function(y, source_scale, source_sigma, meas_sigma) {
       last_mean <- 0
       last_sigma <- source_sigma
       k <- 1:length(y)
       for(i in 1:length(y)) {
           est_mean <- pred_mean(source_scale, last_mean)
           est_sigma <- pred_sigma(source_scale, last_sigma, source_sigma)
           k[i] <- est_mean+rnorm(1, sd=est_sigma)
           last_mean <- update_mean(est_mean, est_sigma, y[i], meas_sigma)
           last_sigma <- update_sigma(est_sigma, meas_sigma)
       }
       k
    }

    A few quick derivations are required for this code to make sense. The update_mean and the update_variance functions were described at the start of the blog, but where on earth did the pred_mean and pred_sigma functions come from?
    According to the notation of the Kalman filter, we currently have
    $X_i = \alpha_s X_{prev}+\omega_s$, where $\alpha_s$ is our source scale, and $\omega_s$ is zero-mean noise with the source standard deviation. In Kalman filter terms, this equation is the state model, and we got it from the knowledge we have about how the randomwalk values were generated.
    The second equation, our measurement (observation) model, is $Y_i = X_i + \omega_m$, where $\omega_m$ is the measurement noise.
    In order to get the equations for our pred_mean and pred_sigma we want to find the expected value and variance of the state model, which looks like this.
    $E[X_i] = E[\alpha_s X_{prev}+\omega_s]$

    using the rules of associativity for expected value...

    $E[X_i] = \alpha_s E[X_{prev}]+E[\omega_s]$

    The expected value of zero-mean noise is 0, so now we find

    $E[X_i] = \alpha_s E[X_{prev}]$

    This matches exactly with the code for pred_mean - our new guess is the old mean multiplied by the scale factor.
    The pred_sigma function is a little trickier.
    $Var(X_i) = Var(\alpha_s X_{prev}+\omega_s)$
    Since $Var(Q)$ is the same as $E[(Q-E[Q])^2]$, we now see
    $Var(X_i) =$
    $E[(\alpha_s X_{prev}+\omega_s)^2]-E[\alpha_s X_{prev}+\omega_s]^2$
    Expanding, this now becomes
    $Var(X_i) =$
          $\alpha_s^2 E[X_{prev}^2]+2\alpha_s E[X_{prev}] E[\omega_s]+$
                      $E[\omega_s^2] - E[\alpha_s X_{prev} + \omega_s]^2$

    Continuing, we see that

    $ - E[\alpha_s X_{prev} + \omega_s]^2$ can be expanded to
    $ - E[\alpha_s X_{prev}+\omega_s]E[\alpha_s X_{prev}+\omega_s]$

    which becomes

    $-(E[\alpha_s X_{prev}]+E[\omega_s])(E[\alpha_s X_{prev}]+E[\omega_s])$
    FOIL, and we now see 

    $-\alpha_s^2E[X_{prev}]^2-2\alpha_s E[\omega_s]E[X_{prev}]-E[\omega_s]^2$

    One view of this expression is that $\omega_s$ is a normally distributed random variable with a mean of 0. $E[\omega_s]$ is 0, and squaring that is still 0, so both the $E[\omega_s]$ and $E[\omega_s]^2$ end up going to 0. It is important to note that $E[\omega_s^2]$ will NOT go to zero, as the mean of the squared distribution is no longer centered about 0, and instead becomes $\omega_s^2$.

    However, another, more natural view is to note that $E[\omega_s^2]-E[\omega_s]^2$ is identical to $Var(\omega_s)$, which is actually $\omega_s^2$. The $2\alpha_s E[X_{prev}]E[\omega_s]$ cross terms vanish, since $E[\omega_s] = 0$.

    Either way, the final result is the same.

    $Var(X_i) = \alpha_s^2 Var(X_{prev}) + \omega_s^2$

    This matches the code for pred_sigma - the new sigma is the square root of (scale factor times the previous sigma) squared plus the source sigma squared.
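    The two prediction functions can also be transcribed into Python as a quick sketch (names mirror the R code; only the math module is needed):

```python
import math

def pred_mean(source_scale, prev_mean):
    # E[X_i] = alpha_s * E[X_prev]
    return source_scale * prev_mean

def pred_sigma(source_scale, prev_sigma, source_sigma):
    # Var(X_i) = alpha_s^2 * Var(X_prev) + omega_s^2
    return math.sqrt((source_scale * prev_sigma)**2 + source_sigma**2)
```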

    An alternate derivation leads to the same result.

    Step 4. 
    Now we can write a function to tie it all together.

    runit <- function() {
         source_sigma <- .01
         source_scale <- sqrt(1-source_sigma**2)
         meas_sigma <- .4
         x <- randomwalk(1000, source_scale, source_sigma)
         y <- received(x, meas_sigma)
         k <- filt(y, source_scale, source_sigma, meas_sigma)
         plot(y, type="l", col="red")
         lines(k, col="green")
         lines(x, col="blue")
    }


    The blue is the source signal, the red is the signal at measurement, and green is the recovered signal. Even when measurement noise washes out the root signal, we can recover the original fairly well.

    Try tuning each of the sigmas, and see how the results change - it's pretty interesting.

    There you have it! A simple, logical derivation of the Kalman filter as a recursive Bayesian filter. In the future I plan to write about more complex statistical processing methods as I learn them, such as how to run this simulation with 0 known parameters, or implementation of one of the non-linear Kalman filter algorithms.

    As always, critique is both welcome and appreciated. 
    To run this code, simply copy and paste into a source file. Then open an R interpreter, do source("path/to/source/file"), then do runit().

    randomwalk <- function(num, source_scale, source_sigma) {
        randy <- c(0,1:num)
        for(i in 2:length(randy)) {
            randy[i] <- source_scale*randy[i-1]+rnorm(1, sd=source_sigma)
        }
        randy
    }

    received <- function(sent, meas_sigma) {
        measured <- sent+rnorm(length(sent), sd=meas_sigma)
        measured
    }

    pred_mean <- function(source_scale, prev_mean) {
        source_scale*prev_mean
    }

    pred_sigma <- function(source_scale, prev_sigma, source_sigma) {
        sqrt((source_scale*prev_sigma)**2+source_sigma**2)
    }

    update_mean <- function(pred_mean, pred_sigma, meas_val, meas_sigma) {
        numerator <- (pred_mean/(pred_sigma**2))+(meas_val/(meas_sigma**2))
        denominator <- (1/(pred_sigma**2))+(1/(meas_sigma**2))
        numerator/denominator
    }

    update_sigma <- function(pred_sigma, meas_sigma) {
        r = (1/(pred_sigma**2))+(1/(meas_sigma**2))
        sqrt(1/r)
    }

    filt <- function(y, source_scale, source_sigma, meas_sigma) {
       last_mean <- 0
       last_sigma <- source_sigma
       k <- 1:length(y)
       for(i in 1:length(y)) {
           est_mean <- pred_mean(source_scale, last_mean)
           est_sigma <- pred_sigma(source_scale, last_sigma, source_sigma)
           k[i] <- est_mean+rnorm(1, sd=est_sigma)
           last_mean <- update_mean(est_mean, est_sigma, y[i], meas_sigma)
           last_sigma <- update_sigma(est_sigma, meas_sigma)
       }
       k
    }

    runit <- function() {
        source_sigma <- .01
        source_scale <- sqrt(1-source_sigma**2)
        meas_sigma <- .4
        x <- randomwalk(1000, source_scale, source_sigma)
        y <- received(x, meas_sigma)
        k <- filt(y, source_scale, source_sigma, meas_sigma) 
        plot(y, type="l", col="red")
        lines(k, col="green")
        lines(x, col="blue")
    }
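    For readers who prefer Python to R, here is a rough NumPy transcription of the same pipeline - a sketch under the same model assumptions, not the original code (it assumes numpy is installed, and skips the plotting):

```python
import numpy as np

rng = np.random.default_rng(0)

def randomwalk(num, source_scale, source_sigma):
    # Generate the source signal: a scaled random walk.
    x = np.zeros(num + 1)
    for i in range(1, len(x)):
        x[i] = source_scale * x[i - 1] + rng.normal(scale=source_sigma)
    return x

def received(sent, meas_sigma):
    # Add measurement noise to the transmitted signal.
    return sent + rng.normal(scale=meas_sigma, size=len(sent))

def filt(y, source_scale, source_sigma, meas_sigma):
    last_mean, last_sigma = 0.0, source_sigma
    k = np.zeros(len(y))
    for i, meas in enumerate(y):
        # Predict from the state model.
        est_mean = source_scale * last_mean
        est_sigma = np.sqrt((source_scale * last_sigma)**2 + source_sigma**2)
        k[i] = est_mean + rng.normal(scale=est_sigma)
        # Update: precision-weighted blend of prediction and measurement.
        den = 1 / est_sigma**2 + 1 / meas_sigma**2
        last_mean = (est_mean / est_sigma**2 + meas / meas_sigma**2) / den
        last_sigma = np.sqrt(1 / den)
    return k

source_sigma = 0.01
source_scale = np.sqrt(1 - source_sigma**2)
meas_sigma = 0.4
x = randomwalk(1000, source_scale, source_sigma)
y = received(x, meas_sigma)
k = filt(y, source_scale, source_sigma, meas_sigma)
```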


    Sunday, February 12, 2012

    Copy-free Data From C In Python

    Lately, I have found myself doing heavy computational work in C, storing large volumes of data in C arrays. However, once the computational work is done, I also need to explore the data and decide how to make algorithmic decisions about various features of the dataset. What I needed was an embedded interpreter.

    Luckily, there is a ton of information out there about embedding interpreters in C and C++ code. Languages such as Lua, Tcl, JavaScript, and Python are all well suited to this, and bring the ability to explore algorithmic changes in scripted code without having to recompile each time. Lua especially has a large following in the gaming community, with well-documented examples of usage in C and C++ code.

    Lua has a very simple, stack-based interface that works very well for simple tasks. However, it does require data to be copied when it is pushed onto the stack, so that the Lua environment has its own copy which can be garbage collected. Raw C pointers can be pushed in the form of lightuserdata, but these pointers didn't appear to be of much use here.

    Due to the size of my dataset, any kind of copying after initialization in my C code is a big performance hit. The data transition to Lua was fast, but not fast enough. I needed another solution.

    The Python-C API is very powerful and, like most of Python, extensively documented. The basic concept is to instantiate a Python environment, create a PyObject* in C, then run a script that uses the initialized PyObject. The PyObject* can be instantiated using different functions, which can create any type of Python object.

    However, there is a problem. The default Python API requires data to be copied into a list if the data size and values are not known at compile time. This put me in a similar position to the Lua attempt, and copying my data to Python took even longer than the copy to Lua.

    Things were grim, but there was one last ray of hope... NumPy!
    Using the NumPy-C API, it is possible to create a PyArray from a pointer. What this means is that a pointer to the start of a data array can be passed into a function, and a PyObject of type NumPy array will be created, where the data content of the NumPy array is actually the C array! With no data copied across the boundary, I have both the flexibility of a scripted interface and the raw computational power of C, with near-zero overhead.
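The same zero-copy behavior is easy to demonstrate from pure Python, with a ctypes array standing in for memory allocated on the C side (this is my own illustration of the idea, not the post's C code): np.frombuffer wraps existing memory without copying it.

```python
import ctypes
import numpy as np

# A ctypes array stands in for memory allocated on the C side.
c_vals = (ctypes.c_int32 * 5)(0, 1, 2, 3, 4)

# frombuffer creates a NumPy view over the SAME memory; nothing is copied.
data = np.frombuffer(c_vals, dtype=np.int32)

c_vals[2] = 99          # mutate the "C" array...
print(data[2])          # ...and the NumPy view sees the change
```

Because the view and the source share one buffer, writes on either side are visible to the other, which is exactly the property the PyArray approach exploits.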

    The only downside is that Python will not handle freeing the memory pointed to by the PyArray. This means you may have to free() the memory in your C code after the Python code has run, if you allocated it using malloc. You should also be extra careful not to free the memory while Python is still looking at it. Once again, trading complexity for speed has its benefits.
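On the Python side, one way to guard against a script scribbling on shared memory is to clear the array's writeable flag. A small sketch of that idea (my own illustration, using an ordinary NumPy array rather than a C-backed one):

```python
import numpy as np

data = np.arange(5, dtype=np.int32)
data.flags.writeable = False       # make the shared view read-only

try:
    data[0] = 42                   # any write now raises ValueError...
except ValueError:
    print("write blocked")         # ...instead of corrupting shared memory
```

This is the Python-level counterpart of making the array read-only from C, which is handy if several scripts are run over the same base data.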

    I have some (excessively?) strong beliefs that NumPy may be the future of data processing, but I think I'll keep that sermon to myself.

    On Ubuntu 11.10, I needed the following packages: python-dev and python-numpy. I acquired them by doing sudo apt-get install python-dev python-numpy

    To compile, I used:
    gcc -O3 -std=c99 -I$NUMPY test.c -lpython2.7
        where NUMPY was previously declared using
        export NUMPY=/usr/lib/pymodules/python2.7/numpy/core/include/numpy/

    Once compiled, I simply ran ./a.out followed by the path of my Python file.

    This should print out the numbers 0-999, which are the numbers we stored in the C code. We are accessing C values directly in Python, without copying the data. Nice!

    The Python script is simply:

    for i in data:
        print i

    The C code that sets everything up is:

    #include <python2.7/Python.h>
    #include <arrayobject.h>

    int main(int argc, char** argv) {
        if (argc < 2) {
            printf("This program requires a path to a Python file as an argument!\n");
            return -1;
        }
        //Initialize general Python environment
        Py_Initialize();
        //Initialize NumPy
        import_array();

        //Create array of values
        int size = 1000;
        int vals[size];
        for (int i=0; i<size; i++) {
            vals[i] = i;
        }
        //Indicate that the array is 1-D
        npy_intp dims[1] = {size};

        //Create NumPy array that shares data with vals
        PyObject* data = PyArray_SimpleNewFromData(1, dims, NPY_INT, (void*)vals);
        //Add data array to globally accessible space
        PyObject* m = PyImport_AddModule("__main__");
        PyModule_AddObject(m, "data", data);

        //Optional ability to make data read-only. Nice if you want to run multiple
        //python files over the same base data
        PyObject* py_f = PyFile_FromString(argv[1], "r");
        PyRun_SimpleFile(PyFile_AsFile(py_f), argv[1]);
        Py_Finalize();
        return 0;
    }