Viewing Computer Vision from a Bigger Picture

written by gooly (Li Yang Ku)

It’s easy to get lost when doing computer vision research, especially when you are deep in the code, tweaking parameters and trying to improve one result without breaking another. When you find yourself doing this for more than half a day, it’s probably a good time to sit back and look at the big picture.

For most computer vision problems, say object recognition, the quest is really just to put data into different bins. Say we have a 200 by 200 grayscale image; we can view it as a point in a 200 × 200 = 40,000-dimensional space. The problem then becomes how to classify these points into different bins. In the paper “Origins of Scaling in Natural Images”, Ruderman showed that natural images share a common frequency spectrum. This suggests that natural images lie in a much smaller subset of this immense space. If we can map such a high-dimensional point to a lower dimension while throwing away only uninformative data, we can classify it much more easily.
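As a minimal sketch of this idea (using random arrays as stand-in images, not a real dataset), the code below flattens 200 × 200 images into points in a 40,000-dimensional space, then projects them onto a 10-dimensional subspace using PCA computed via an SVD:

```python
import numpy as np

# A toy "dataset" of 50 random 200x200 grayscale images,
# each flattened into a point in a 40,000-dimensional space.
rng = np.random.default_rng(0)
images = rng.random((50, 200, 200))
points = images.reshape(50, -1)          # shape: (50, 40000)

# Map each point into a much lower-dimensional subspace with PCA
# (computed here via SVD on the mean-centered data).
centered = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
low_dim = centered @ vt[:10].T           # shape: (50, 10)

print(points.shape, low_dim.shape)
```

With real natural images the leading principal components capture far more of the variance than with random noise, which is exactly the structure Ruderman’s result points to.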

Most vision work resides in this part: taking high-dimensional data and turning it into lower-dimensional data. SIFT, HOG, SURF, and countless other feature descriptors are doing just this, trying to find the low-dimensional representation that tells the most.
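To make the “high dimension in, low dimension out” step concrete, here is a toy HOG-style descriptor (a deliberate simplification, not the actual SIFT or HOG algorithm): it summarizes a whole 40,000-pixel image with a 9-bin histogram of gradient orientations, weighted by gradient magnitude.

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Toy HOG-style descriptor: a magnitude-weighted histogram of
    gradient orientations over the whole image."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # orientations in [0, pi)
    bins = (ang / np.pi * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-12)             # normalize to sum to 1

rng = np.random.default_rng(1)
img = rng.random((200, 200))                        # stand-in image
descriptor = orientation_histogram(img)
print(descriptor.shape)                             # 40,000 pixels -> 9 numbers
```

Real descriptors compute such histograms over many local cells and concatenate them, but the spirit is the same: keep the informative structure, discard the rest.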

Then we head to the second step, where we classify this lower-dimensional data. The classifier could be as simple as nearest neighbor, a probability comparison against a trained model, or any machine learning algorithm such as AdaBoost, SVM, or a neural network. This step sorts all the points into different categories.
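The simplest of these, nearest neighbor, can be sketched in a few lines (with synthetic clusters standing in for the low-dimensional feature vectors from the previous step):

```python
import numpy as np

def nearest_neighbor(train_x, train_y, query):
    """Classify a query point with the label of its closest training point."""
    dists = np.linalg.norm(train_x - query, axis=1)
    return train_y[np.argmin(dists)]

# Two well-separated clusters in a 10-dimensional feature space.
rng = np.random.default_rng(2)
class0 = rng.normal(0.0, 0.5, size=(20, 10))
class1 = rng.normal(5.0, 0.5, size=(20, 10))
train_x = np.vstack([class0, class1])
train_y = np.array([0] * 20 + [1] * 20)

print(nearest_neighbor(train_x, train_y, np.full(10, 5.0)))  # -> 1
print(nearest_neighbor(train_x, train_y, np.zeros(10)))      # -> 0
```

The choice of classifier matters far less when the first step has already mapped the images into a space where the bins are well separated, which is the point of the previous paragraph.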

So back to where you were tweaking those magic parameters: what you are actually doing is probably slightly changing the subspace your images are mapped to, or throwing the points into bins slightly differently. So take it easy; if it only works after a lot of tweaking, you are probably mapping to the wrong space or binning the points the wrong way.

