In summary, convolutional VAE and batch normalization seem very useful for fast training of VAE models on image data. Unfortunately there appear to be some Theano-related issues with the current implementation of batch normalization, which cause the generated computation graph to run ~100x slower for the convolutional version than for the feedforward version. Eventually the convolutional version should be very efficient, but for now training times of two or three days are far less useful than the 40 minute training times for the feedforward model.

Semi-supervised VAE is also a very promising avenue for learning better generative models, but implementing the model proposed earlier will have to wait until after the course is over. For now, all the code remains posted at https://github.com/kastnerkyle/ift6266h15 , with improvements to both batch normalization and the convolutional model, and soon semi-supervised VAE.

# KK... JK... OK : /

A blog about things.


## Tuesday, April 14, 2015

### IFT6266 Week 11

Switching the optimizer from SGD with Nesterov momentum to rescaling RMSProp with Nesterov momentum has proved quite valuable. The feedforward model now trains to "good sample" level within about 45 minutes. The current code is here: https://github.com/kastnerkyle/ift6266h15
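For reference, the core of an RMSProp-rescaled update with Nesterov-style momentum can be sketched as below. This is a minimal numpy sketch of the general technique, not the repo's actual implementation; the function name and default hyperparameters are my own.

```python
import numpy as np

def rmsprop_nesterov_update(param, grad, avg_sq, velocity,
                            lr=1e-3, rho=0.9, momentum=0.9, eps=1e-6):
    """One parameter update: RMSProp gradient rescaling combined with
    Nesterov-style momentum (a sketch; the repo's version may differ)."""
    # Exponential moving average of squared gradients (the RMSProp part)
    avg_sq = rho * avg_sq + (1.0 - rho) * grad ** 2
    scaled_grad = grad / (np.sqrt(avg_sq) + eps)
    # Nesterov momentum applied to the rescaled gradient
    velocity_new = momentum * velocity - lr * scaled_grad
    param = param + momentum * velocity_new - lr * scaled_grad
    return param, avg_sq, velocity_new
```

The rescaling keeps the effective step size roughly uniform across parameters, which is likely why it trains so much faster than plain SGD here.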

However, the convolutional model takes 3 days! Something might be wrong...

Original:

Samples from the feedforward model:

Reconstructions from feedforward:

Original:

Samples from the convolutional model:

Reconstructions from the convolutional model:


## Sunday, April 12, 2015

### IFT6266 Week 10

This week was largely spent on presentations and getting ready for the last push before April 20th.

Semi-supervised (feedforward) VAE will probably be my last topic. The model I hope to use will take the label and concatenate it after the code layer, which should allow the model to mix this information in during reconstruction. This means it should be possible to sample the code layer, clamp the label to the ground truth (or any chosen label), and get examples of the generated class. It should also be possible to feed in unlabeled X and generate Y'. The cost would then be NLL + KL + indicator {labeled, not labeled} * softmax error.

This can be seen as two separate models that share parameters: a standard classifier from X to Y, predicting Y', and a VAE from X to X' where the sampled code layer is partially clamped. This may require adding another KL term, but I hope it will be sufficient to train the softmax penalty using the available labeled data. In the limit of no labels, this should devolve back into a standard VAE with KL evaluated on only *part* of the code layer, which may not be ideal. The softmax parameters of the white box may be more of a problem than I am anticipating.

This model departs somewhat from others in the literature (to my knowledge), so there may be a flaw in this plan.
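The combined cost described above can be sketched in numpy as follows. All names here are hypothetical (this is my own sketch, not code from the repo); the point is that the indicator simply zeroes out the softmax term for unlabeled examples.

```python
import numpy as np

def semisup_vae_cost(recon_nll, kl, y_true, y_pred_probs, labeled):
    """Sketch of the cost NLL + KL + indicator{labeled} * softmax error.

    recon_nll, kl : per-example reconstruction NLL and KL terms
    y_pred_probs  : (batch, classes) softmax outputs from the classifier path
    labeled       : 1 where a ground-truth label exists, 0 otherwise
    """
    # Cross-entropy of the true class under the predicted softmax
    softmax_error = -np.log(y_pred_probs[np.arange(len(y_true)), y_true] + 1e-8)
    indicator = labeled.astype(float)  # unlabeled examples pay no softmax cost
    return np.mean(recon_nll + kl + indicator * softmax_error)
```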

Diagram:


## Saturday, April 4, 2015

### IFT6266 Week 9

This week I recoded a basic feedforward VAE, using batch normalization at every layer. There are still some small plumbing issues related to calculating fixed point statistics but I am hoping to solve those soon.

Random samples from Z:

Adding BN to VAE appears to make it much easier to train. I am currently using standard SGD with Nesterov momentum, and it is working quite well. Before adding batch normalization, no one (to my knowledge) had been able to train a VAE using MLP encoders and decoders, on real-valued MNIST, with a Gaussian prior. A tiny niche to be sure, but one I am happy to have succeeded in!
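For reference, the KL term for this setup has the standard closed form for a diagonal Gaussian posterior against a standard Gaussian prior. Below is a generic numpy sketch (not the repo's code):

```python
import numpy as np

def gaussian_kl(mu, log_sigma_sq):
    """Per-example KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    with mean mu and log-variance log_sigma_sq, summed over latent dims."""
    return -0.5 * np.sum(1.0 + log_sigma_sq - mu ** 2 - np.exp(log_sigma_sq),
                         axis=1)
```

The term is zero exactly when the posterior matches the prior (mu = 0, sigma = 1), and grows as the code layer drifts away from it.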

Source image:

Reconstructed:

Random samples from Z:

I am currently finalizing a convolutional VAE (as seen in my early posts) with the addition of batch normalization. If this network performs as well as before, I plan to extend to semi-supervised learning either with the basic VAE or the convolutional one to finish the course.


## Wednesday, March 25, 2015

### IFT6266 Week 8

After looking at batch normalization, I really think the gamma and beta terms are correcting for the bias in the minibatch estimates of mean and variance, but I have not confirmed this. I am also toying with ideas along the same lines as Julian's, except using reinforcement learning to choose the optimal minibatch (the one giving the largest expected reduction in training or validation error) rather than controlling hyperparameters as he is doing. One possible option would be something like CACLA for real-valued actions, and LSPI for discrete "switches".
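To make the role of gamma and beta concrete, a minimal batch normalization forward pass might look like the following numpy sketch (names and layout are my own, not those of the linked layer code):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch normalization over a minibatch: standardize each feature with
    the minibatch mean/variance, then let the learned gamma/beta rescale
    and reshift so the layer is not locked to zero-mean unit-variance."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

Whatever bias the minibatch statistics introduce, gamma and beta give the network a learned escape hatch from pure standardization.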

Batch normalization (and nesterov momentum) seem to help. After only 11 epochs, an ~50% smaller network is able to reach equivalent validation performance.

| Epoch | Train Accuracy | Valid Accuracy | Loss     |
|-------|----------------|----------------|----------|
| 11    | 0.874000       | 0.802000       | 0.364031 |

The code for the batch normalization layer is here:

https://github.com/kastnerkyle/ift6266h15/blob/master/normalized_convnet.py#L46

With the same-sized network as before, things stay pretty consistently around 80% but begin to massively overfit. The best validation scores, with 0.95 Nesterov momentum, are:

| Epoch | Train Accuracy | Valid Accuracy | Loss     |
|-------|----------------|----------------|----------|
| 10    | 0.875050       | 0.813800       | 0.351992 |
| 36    | 0.967650       | 0.815800       |          |
| 96    | 0.992100       | 0.822000       |          |

I next plan to try batch normalization on fully connected and convolutional VAEs, first on MNIST, then LFW, then probably cats and dogs. It would also be nice to either a) dig into batch normalization and do it properly, *or* simplify the equations somehow, or b) do some reinforcement learning like Julian is doing, but on the minibatch selection process. However, time is short!


## Thursday, March 12, 2015

### IFT6266 Week 7

With a 50% probability of introducing a horizontal flip, the network gets very close to passing the 80% threshold with early stopping.

| Epoch | Train Accuracy | Valid Accuracy |
|-------|----------------|----------------|
| 93    | 0.951350       | 0.794200       |
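The flip augmentation itself can be sketched in a few lines of numpy (a hypothetical helper, not the repo's code), flipping each image in a (batch, channels, height, width) array left-right with probability p:

```python
import numpy as np

def random_horizontal_flip(batch, rng, p=0.5):
    """Return a copy of the batch with each image flipped along the
    width axis with probability p (a data-augmentation sketch)."""
    flipped = batch.copy()
    mask = rng.uniform(size=len(batch)) < p  # which images to flip
    flipped[mask] = flipped[mask][:, :, :, ::-1]
    return flipped
```

Since horizontal flips of cats and dogs are still valid cats and dogs, this effectively doubles the training distribution at no labeling cost.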


With the addition of per-pixel mean and standard deviation normalization, the network successfully gets over 80% validation accuracy (barely)!

| Epoch | Train Accuracy | Valid Accuracy |
|-------|----------------|----------------|
| 74    | 0.878000       | 0.803800       |
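Per-pixel normalization amounts to standardizing each pixel position using statistics computed on the training set only, then applying the same transform to held-out data. A minimal sketch (hypothetical names, not the repo's code):

```python
import numpy as np

def per_pixel_normalize(train, test):
    """Standardize each pixel position by its training-set mean and
    standard deviation; the test set reuses the training statistics."""
    mean = train.mean(axis=0)
    std = train.std(axis=0) + 1e-8  # avoid division by zero for dead pixels
    return (train - mean) / std, (test - mean) / std
```

Reusing the training statistics on validation/test data is what keeps the preprocessing honest: no information leaks from held-out images.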

The working version of the network can be seen here:

https://github.com/kastnerkyle/ift6266h15/blob/master/convnet.py

Note that addition of ZCA did not seem to help in this case!
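For comparison, ZCA whitening (the preprocessing that did not help here) can be sketched as below. This is a generic numpy sketch of the standard technique, not the code used in these experiments; the epsilon value is an assumption.

```python
import numpy as np

def zca_whiten(X, eps=1e-2):
    """ZCA whitening on flattened examples (rows of X): rotate into the
    PCA basis, rescale by eigenvalues, rotate back, so whitened images
    still resemble the originals while decorrelating the features."""
    Xc = X - X.mean(axis=0)
    cov = np.dot(Xc.T, Xc) / len(Xc)
    U, S, _ = np.linalg.svd(cov)
    W = U.dot(np.diag(1.0 / np.sqrt(S + eps))).dot(U.T)
    return Xc.dot(W)
```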


With these changes, I will move on to implementing other models and ideas. First up, batch normalization, then spatially sparse convolutions and possibly fractional max-pooling.
