Life is a game, take it seriously


Hands on Capri: building an OpenNI2 ROS interface

In Computer Vision, Kinect on June 9, 2013 at 10:57 am

by Gooly (Li Yang Ku)

[Image: point cloud captured with OpenNI 2]

Luckily, because of my day job I am able to get my hands on the new PrimeSense sensor Capri. According to their website, this gum-stick-sized gadget is unlikely to be on sale individually anytime soon. If you haven’t worked with any Kinect-like device before, PrimeSense is the company behind the original Kinect, the Asus Xtion, and OpenNI. The new sensor is a shrunk-down version of the original Kinect sensor intended to be embedded in tablets.

Since this sensor only works with OpenNI 2.2, I am forced to come out of my OpenNI 1 comfort zone and do the upgrade on my Ubuntu machine. (I am still using ROS Fuerte though; life is full of procrastination.) To tell the truth, switching is easy: OpenNI 2 uses different library names, so I have no problem keeping my old code and OpenNI 1 working while testing the new OpenNI 2.

[Image: Capri sensor + OpenNI 2 + ROS]

To get your sensor (Xtion, Kinect, Capri) working with OpenNI 2 on Ubuntu, download the OpenNI SDK. (Last week when I tested it, 2.2 only worked for the Capri and 2.1 only worked for the Xtion. I am not sure why the 2.1 download no longer shows up on the website; they probably updated 2.2 so it now works for both.) Extract the zip file and open the folders until you see the install script, then run “sudo ./install” in a terminal. You can then test tools such as SimpleRead under the Samples folder or NiViewer under the Tools folder. (Note that the Capri only worked with SimpleRead when I tested it.)

To get the Capri working with my old ROS (Robot Operating System) stuff, I built a ROS package that publishes a depth image and point cloud using the OpenNI 2 interface. This ROS interface should work with all PrimeSense sensors, not just the Capri. The part worth pointing out is the CMakeLists.txt file in the ROS package. The following is what I added to link against the right library.

cmake_minimum_required(VERSION 2.4.6)

set(OPENNI2_DIR "{location of OpenNI2 library}")
# In the extracted SDK, headers live under Include/ and libOpenNI2.so under Redist/
include_directories(${OPENNI2_DIR}/Include)
link_directories(${OPENNI2_DIR}/Redist)

rosbuild_add_executable(capri src/sensor.cpp)
target_link_libraries(capri boost_filesystem boost_system OpenNI2)

sensor.cpp is the file that reads in the depth information and publishes the depth image and point cloud. I simply modified the SimpleRead code that comes with the OpenNI 2 library (under the Samples folder). If you want to convert a depth image into a point cloud, check out the convertDepthToWorld function.

CoordinateConverter::convertDepthToWorld(depth, i, j, 
pDepth[index], &px, &py, &pz);

The top image in this post is the point cloud I published to RViz. The depth image is 320×240 by default.

Recording 3D video (oni) files that align the rgb image with the depth image

In Computer Vision, Point Cloud Library on July 15, 2012 at 10:51 am

by Gooly (Li Yang Ku)

Kinect- or Xtion-like devices provide an easy way to capture 3D depth images or videos. The OpenNI interface that is compatible with these devices comes with a handy tool, NiViewer, that captures 3D video into an oni file. On Linux the tool is located under “Samples/Bin/x64-Release/NiViewer”; on Windows it should be in the Start menu. After starting the tool, right-click to show the menu. Clicking “Start Capture” and then “Stop” should generate an oni video file in the same folder.

However, the rgb video and depth video in the oni file are recorded separately and are not aligned. (It turns out they should be aligned if you press 9, but that didn’t work on my machine; see the comments.) This is due to the difference in camera position between the IR sensor and the rgb sensor. OpenNI does provide functions to adjust the depth image to match the rgb image (note that it is not doable the other way around). By adding some additional code to NiViewer, you should be able to record depth video that is aligned with the rgb image.

First open the file “src/NiViewer/Capture.cpp” and change the code that adds the depth node in the captureFrame() function to the following.

nRetVal = g_Capture.pRecorder->AddNodeToRecording(*getDepthGenerator(), g_Capture.nodes[CAPTURE_DEPTH_NODE].captureFormat);
START_CAPTURE_CHECK_RC(nRetVal, "add depth node");
g_Capture.nodes[CAPTURE_DEPTH_NODE].bRecording = TRUE;

// Point the depth stream at the rgb camera's viewpoint before recording.
// Try one registration type first and fall back to the other, since
// different sensors support different types (see the note below).
DepthGenerator* depth = getDepthGenerator();
depth->SetIntProperty("RegistrationType", 1);
nRetVal = depth->GetAlternativeViewPointCap().SetViewPoint(*getImageGenerator());
if (XN_STATUS_OK != nRetVal)
{
	depth->SetIntProperty("RegistrationType", 2);
	nRetVal = depth->GetAlternativeViewPointCap().SetViewPoint(*getImageGenerator());
	if (XN_STATUS_OK != nRetVal)
		displayMessage("Getting and setting AlternativeViewPoint failed");
}
g_Capture.nodes[CAPTURE_DEPTH_NODE].pGenerator = depth;

Then type make in a terminal in the NiViewer folder; the new NiViewer binary should be generated.
Kinect and Xtion sensors use different methods to generate this alternative-view depth image: Kinect does the adjustment in software, while Xtion does it in hardware. This is why two different “RegistrationType” values are tried in the code above.