Bayesian Ensembling: Letter Classification

Machine learning algorithms sometimes feel a bit like using a fork-lift to hang a picture. Sure, it can work, but you have to be careful. Like, really careful. This project was my first substantive foray into machine learning models for image data. Truly a fork-lift-worthy task. The overall goal was to compare a handful of models (KNN, random forest, SVM, etc.) for classifying handwritten letters. The real takeaway, though, was an ensembling method my partner and I came up with to combine the models: a Bayesian approach that weights each model's predictions using its confusion matrix. We ended up smashing the competition in terms of accuracy, and I also learned a lot about how much better suited ML models are to classification tasks than traditional statistical methods.
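
To make the idea concrete, here is a minimal sketch of what confusion-matrix ensembling can look like. The post doesn't spell out the exact formulation we used, so treat this as one plausible reconstruction: each model's (held-out) confusion matrix is read as an estimate of P(prediction | true class), and Bayes' rule combines the models' hard predictions into a posterior over classes. The function name and the toy matrices are illustrative, not from the original project.

```python
import numpy as np

def bayesian_ensemble(confusions, predictions, prior=None):
    """Combine hard predictions from several classifiers via Bayes' rule.

    confusions[m][t, p] counts how often model m predicted class p when
    the true class was t (rows = true class, columns = predicted class).
    predictions[m] is model m's predicted class index for one sample.
    Returns the posterior distribution over true classes.
    """
    n_classes = confusions[0].shape[0]
    if prior is None:
        prior = np.ones(n_classes) / n_classes  # uniform prior over classes

    log_post = np.log(prior)
    for cm, pred in zip(confusions, predictions):
        # P(model predicts `pred` | true class = t), Laplace-smoothed so a
        # zero cell in the confusion matrix can't veto a class outright.
        likelihood = (cm[:, pred] + 1) / (cm.sum(axis=1) + n_classes)
        log_post += np.log(likelihood)

    post = np.exp(log_post - log_post.max())  # subtract max for stability
    return post / post.sum()

# Toy example: two models, three classes. Model B confuses class 1 with
# class 0 often, so its vote for class 0 carries less weight there.
cm_a = np.array([[8, 1, 1],
                 [1, 8, 1],
                 [2, 2, 6]])
cm_b = np.array([[9, 1, 0],
                 [3, 5, 2],
                 [0, 1, 9]])
posterior = bayesian_ensemble([cm_a, cm_b], predictions=[0, 0])
print(posterior.argmax())  # both models say class 0, posterior agrees
```

The appeal over simple majority voting is that a model which habitually confuses, say, "I" and "L" contributes a flatter likelihood over those two classes, so its vote counts for less exactly where it is known to be unreliable.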