Traffic Sign Classifier

The second project in the self-driving car nanodegree was to build a pipeline that can classify images showing traffic signs.

We were given a dataset of about 34,000 images, each showing one of 43 classes of German traffic signs.

I extended the dataset with translated and blurred versions of some of the images, which slightly improved the classification accuracy.
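
To illustrate the idea, here is a minimal sketch of that kind of augmentation using OpenCV and NumPy. The function name and parameter values are illustrative, not taken from my project code.

```python
import numpy as np
import cv2

def augment(image, max_shift=3):
    """Return a randomly translated and blurred copy of a sign image."""
    h, w = image.shape[:2]
    # Random shift of up to `max_shift` pixels in each direction.
    tx, ty = np.random.randint(-max_shift, max_shift + 1, size=2)
    m = np.float32([[1, 0, tx], [0, 1, ty]])     # 2x3 translation matrix
    shifted = cv2.warpAffine(image, m, (w, h))   # empty border is filled with black
    return cv2.GaussianBlur(shifted, (3, 3), 0)  # mild blur
```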

Many other techniques could be used to build a more robust classifier. For instance, one could build a full image-transformation pipeline with imgaug or Augmentor, two Python libraries built exactly for that purpose. Such a pipeline could randomly rotate the images, add noise, zoom, squeeze, and apply many other transformations.
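
As a rough sketch (not code from this project), a simple imgaug pipeline along those lines could look like the following; the choice of augmenters and their parameter ranges here are just examples.

```python
import imgaug.augmenters as iaa

seq = iaa.Sequential([
    iaa.Affine(rotate=(-10, 10), scale=(0.9, 1.1)),    # small random rotation and zoom
    iaa.GaussianBlur(sigma=(0.0, 1.0)),                # mild blur
    iaa.AdditiveGaussianNoise(scale=(0, 0.03 * 255)),  # pixel noise
])

# `images` is assumed to be an (N, 32, 32, 3) uint8 array of sign images.
augmented_images = seq(images=images)
```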

The structure of the neural network I used was almost identical to the popular LeNet architecture. LeNet was originally designed to recognize handwritten digits, and it does that job impressively well. Check out the unusual, distorted digits it can recognize, even though no similar examples were present in its training set.
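
For reference, here is a minimal sketch of a LeNet-style network adapted to 32x32 RGB inputs and 43 sign classes, written with tf.keras for brevity. My actual submission followed a similar layout, but the exact layer sizes and framework details here are only an approximation.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(6, 5, activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 5, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(120, activation='relu'),
    layers.Dense(84, activation='relu'),
    layers.Dense(43, activation='softmax'),  # one output per traffic sign class
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```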

The project was a bit tedious: the image-augmentation process took quite some time, and tweaking the parameters and retraining had to be done several times, with each run taking around half an hour even on the powerful AWS GPU instance that I used. A lot of trial and error and patience were required.

I am not very satisfied with my submission, as it could really benefit from some tender loving care: further augmentation of the training data, additional layers in the neural network, and fine-tuning of the parameters.


Written on November 25, 2017

If you notice anything wrong with this post (factual error, rude tone, bad grammar, typo, etc.), and you feel like giving feedback, please do so by contacting me at samubalogh@gmail.com. Thank you!