If you read my last post, you know I am working on a social app. It turned out that the social app didn’t work as we imagined due to some false assumptions we made, so we came up with a slightly different idea and are still testing it. In the meantime, I decided to post some vision work I did.
The goal of this project is to recognize objects with limited training images, even under slightly different viewing angles. Using only a few images has a lot of advantages, especially for researchers who are too lazy to collect images and don’t have the patience to wait several hours or days for training. The concept is simple: we look at the four training images we have and try to find what is common. Then we take the common structure and appearance and turn them into a model.
So first, in order to be rotation and scale invariant, we find the SURF points of all training images.
We build the structure by combining SURF points into a chain of triangles using dynamic programming, and then the model is done. For a test image, we simply match the model to its SURF points. The results are fairly good across different objects and viewing angles. You can download the full paper here (A Probabilistic Model for Object Matching).
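The matching step above can be sketched at its simplest level: nearest-neighbour descriptor matching with a ratio test, which is the usual first pass before any structural (triangle-chain) reasoning. This is a plain NumPy sketch, not the paper’s probabilistic model; the random 64-D vectors are stand-ins for SURF descriptors, and the noise/clutter setup is an assumption to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 64-D "SURF" descriptors: 20 model points from training,
# and a test image containing noisy copies of them plus 30 clutter points.
model = rng.normal(size=(20, 64))
test = np.vstack([model + 0.01 * rng.normal(size=model.shape),
                  rng.normal(size=(30, 64))])

def match(model_desc, test_desc, ratio=0.8):
    """Nearest-neighbour matching with a ratio test to reject ambiguous hits."""
    # Pairwise Euclidean distances between every model/test descriptor pair.
    d = np.linalg.norm(model_desc[:, None, :] - test_desc[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]      # two closest test descriptors
        if row[j1] < ratio * row[j2]:     # keep only clearly-best matches
            matches.append((i, j1))
    return matches

m = match(model, test)
print(len(m))  # all 20 model points find their noisy copies
```

In the actual method, these raw matches would then be scored jointly through the triangle-chain structure rather than accepted independently, which is what makes the model tolerant of clutter and viewpoint change.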