Life is a game, take it seriously

Archive for June, 2013|Monthly archive page

Visual Illusion: Chronostasis and Saccadic Masking

In Computer Vision, Neural Science, Visual Illusion on June 26, 2013 at 9:54 pm

by Gooly (Li Yang Ku)

Some visual art to attract your attention; it has little to do with the post.

I have always been intrigued by visual illusions and am often surprised by how easily we are fooled by our eyes. A good visual illusion is as satisfying as a good joke. One of my favorite illusions is the spinning dancer, whose direction of spin I can’t easily change even though I know it can be interpreted both ways. Understanding visual illusions is also crucial in Computer Vision, because illusions are side effects of the underlying algorithm that helps us see. A great vision algorithm should probably exhibit the same visual illusions as humans do.

Spinning Dancer illusion

Chronostasis is a kind of visual illusion that happens to you every moment without you noticing. To test it out, find a clock with a second hand; first fix your gaze somewhere nearby, close enough that you can still see the hand ticking in your peripheral vision, then shift your gaze to the second hand right after it moves. You’ll notice that the first tick seems to last longer than the ticks after it.


This illusion is caused by saccadic masking, a mechanism our brain uses to help us see the world without getting dizzy. Our eyes are constantly moving and our head turns a lot. Saccadic masking shuts down the visual input while the scene hitting your eyes is blurry. So when you move your eyes, the brain has two choices: it can either keep showing the last image or show you the next stable image in the future. Now you might be yelling “HOW COULD THE BRAIN POSSIBLY SHOW YOU AN IMAGE IT HASN’T SEEN!” Right, that’s not possible. But remember that there is no clock ticking in your brain and time is just what you feel; your brain can freeze your internal clock, wait for the next stable image, and then fast-forward the internal clock so it syncs back with the real world. And that’s exactly what happens when you make that first gaze shift to the second hand.


To test out saccadic masking you can also find a mirror and stare at your pretty (or nerdy) eyes. First focus on your left eye, then shift your gaze to your right eye. You won’t be able to see your own eyes saccade because of saccadic masking, but if you record yourself doing the same experiment with a smartphone’s front-facing camera, you will be able to see the saccades clearly. (Note that smartphone cameras have a time delay, so don’t use one as the mirror in this test. It is highly recommended as a mirror outside of the experiment though; it always shows a slightly younger you.)

Hands on Capri: building an OpenNI2 ROS interface

In Computer Vision, Kinect on June 9, 2013 at 10:57 am

by Gooly (Li Yang Ku)

point cloud openni2

Luckily, thanks to my day job I was able to get my hands on the new PrimeSense sensor, Capri. According to their website, this gum-stick-sized gadget is unlikely to be on sale individually anytime soon. If you haven’t worked on any Kinect-like device before, PrimeSense is the company behind the old Kinect, the Asus Xtion, and OpenNI. The new sensor is a shrunken version of the original Kinect sensor, intended to be embedded in tablets.

Since this sensor only works with OpenNI 2.2, I was forced out of my OpenNI 1 comfort zone to do the upgrade on my Ubuntu machine. (I am still using ROS Fuerte though; life is full of procrastination.) To tell the truth, switching is easy: OpenNI 2 uses different library names, so I had no problem keeping my old code and OpenNI 1 working while testing the new OpenNI 2.

capri sensor + OpenNI + ROS

To get your sensor (Xtion, Kinect, Capri) working with OpenNI 2 on Ubuntu, download the OpenNI SDK. (Last week when I tested it, 2.2 only worked for Capri and 2.1 only worked for the Xtion. I am not sure why the 2.1 download no longer shows up on the website; they probably updated 2.2 so it works for both now.) Extract the zip file and dig into the folders until you see the install file, then run “sudo ./install” in the terminal. You can then test tools such as SimpleRead under the samples folder or NiViewer under the tools folder. (Note that Capri only worked with SimpleRead when I tested it.)

To get Capri working with my old ROS (Robot Operating System) stuff, I built a ROS package that publishes a depth image and a point cloud using the OpenNI 2 interface. This ROS interface should work on all PrimeSense sensors, not just Capri. The part worth pointing out is the CMakeLists.txt file in the ROS package. The following is what I added to link against the right library.

cmake_minimum_required(VERSION 2.4.6)

set(OPENNI2_DIR "{location of OpenNI2 library}")
# tell the linker where to find libOpenNI2.so (e.g. the Redist folder of the SDK)
link_directories(${OPENNI2_DIR})

rosbuild_add_executable(capri src/sensor.cpp)
target_link_libraries(capri boost_filesystem boost_system OpenNI2)

sensor.cpp is the file that reads the depth information and publishes the depth image and point cloud. I simply modified the SimpleRead code that comes with the OpenNI 2 library (under the Samples folder). If you want to convert a depth image into a point cloud, check out the convertDepthToWorld function.

CoordinateConverter::convertDepthToWorld(depth, i, j, pDepth[index],
                                         &px, &py, &pz);

The top image in this post is the point cloud I published to RViz. The depth image is 320×240 by default.

Cats and Vision: is vision acquired or innate?

In Computer Vision, Neural Science on June 1, 2013 at 3:40 pm

by Gooly (Li Yang Ku)

cat experiment

Cats have contributed a lot to the development of the internet, but they also play an important role in the field of understanding vision. Around the 1960s, scientists started a series of studies on how the brain processes visual input from the eyes. Cats, which have relatively sharp vision, were their major subjects.

One line of research, which led to a Nobel Prize, was done by Hubel and Wiesel. In 1958 they accidentally discovered neurons in cats that respond to edges of a particular orientation; they named these neurons “simple cells”. These neurons were later found to be organized in orientation columns, and the discovery has had a profound impact on many computer vision tools such as the Gabor filter.

The following video explains how the experiment was done. They first implanted an electrode in a certain area of the cat’s visual cortex. The cat’s head was then fixed facing a projector screen. Different patterns were shown to the cat while the electrode recorded the neural response. (The cat was anesthetized during the whole experiment.)

A further experiment was done by Hubel and Wiesel to determine whether the ability to see is innate or acquired. The experiment was done by suturing shut one eye of a newborn kitten and reopening it after a certain period. Surprisingly, kittens with one eye deprived of vision for the first three months remained blind in that eye for their whole life.

cat experiment

Another cat experiment, done by Blakemore and Cooper, gave an even clearer result. Two special cylinders were made, one with only vertical stripes inside and the other with only horizontal stripes. Newborn kittens were placed in one of the cylinders for their first few months. Kittens that perceived only vertical lines during the first few months after birth could see only vertical lines, not horizontal ones, for the rest of their lives. The following video explains this in more detail.

These experiments show that vision, from the most basic line detection to complex scene recognition, is learned and depends largely on the environment one experiences.


Hubel, David H., and Torsten N. Wiesel. “Receptive fields of single neurones in the cat’s striate cortex.” The Journal of physiology 148.3 (1959): 574-591.

Hubel, David H., and Torsten N. Wiesel. “The period of susceptibility to the physiological effects of unilateral eye closure in kittens.” The Journal of physiology 206.2 (1970): 419.

Blakemore, Colin, and Grahame F. Cooper. “Development of the brain depends on the visual environment.” Nature 228 (1970): 477-478.