
Creating 3D mesh models using Asus xtion with RGBDemo and Meshlab on Ubuntu 12.04

In Computer Vision, Kinect on March 12, 2014 at 5:15 pm

by Li Yang Ku (Gooly)

[Image: Big Bang Theory Kinect 3D scanner]

Creating 3D models simply by scanning an object with low-cost sensors sounds futuristic but isn’t. Although models scanned with a Kinect or Asus Xtion aren’t as pretty as CAD models or laser-scanned models, they might actually be helpful in robotics research: a not-so-perfect model scanned by the same sensor that is on the robot is closer to what the robot perceives. In this post I’ll go through the steps for creating a polygon mesh model by scanning a coke can with the Xtion sensor. The process consists of three parts: compiling RGBDemo, scanning the object, and converting the scanned vertices to a polygon mesh in MeshLab.

RGBDemo

RGBDemo is a great piece of open-source software that can help you scan objects into a single PLY file with the help of some AR tags. If you are using a Windows machine, running the compiled binary should be the easiest way to get started. If you are on an Ubuntu machine, the following are the steps I took. (I ran into compile errors following the official instructions, but they might still be worth a try.)

  1. Make sure you have OpenNI installed. I use the older OpenNI instead of OpenNI2; see my previous post about installing OpenNI on Ubuntu if you haven’t.
  2. Make sure you have PCL and OpenCV installed. For PCL I use the one that comes with ROS (ros-fuerte-pcl) and for OpenCV I have libcv2.3 installed.
  3. Download RGBDemo from Github https://github.com/rgbdemo/rgbdemo.
    git clone --recursive https://github.com/rgbdemo/rgbdemo.git
  4. Modify the file linux_configure.sh under the rgbdemo folder. Add the -DNESTK_USE_OPENNI2=0 option among the other options (shown below with its neighboring lines) so that RGBDemo won’t use OpenNI2.
        
        -DCMAKE_VERBOSE_MAKEFILE=1 \
        -DNESTK_USE_OPENNI2=0 \
        $*
  5. Modify rgbdemo/scan-markers/ModelAcquisitionWindow.cpp and comment out lines 57 to 61. (This fixes the compile error: ‘const class ntk::RGBDImage’ has no member named ‘withDepthDataAndCalibrated’.)
        
        void ModelAcquisitionWindow::on_saveMeshButton_clicked()
        {
            //if (!m_controller.modelAcquisitionController()->currentImage().withDepthDataAndCalibrated())
            //{
                //ntk_dbg(1) << "No image already processed.";
                //return;
            //}
    
            QString filename = QFileDialog::getSaveFileName
            // ... rest of the function stays the same
  6. Configure and build:
    ./linux_configure.sh
    ./linux_build.sh
  7. The binary files should be built under build/bin/.

[Image: turtle mesh]

To create a 3D mesh model, we first capture a model (a PLY file consisting only of vertices) using RGBDemo.

  1. Print out the AR tags located in the folder ./scan-markers/data/ and stick them on a flat board so that the numbers are close to each other. Put your target object in the center of the board.
  2. Run the binary ./build/bin/rgbd-scan-markers
  3. Two windows should pop up, RGB-D Capture and 3D View. Point the camera toward the object on the board and click “Add current frame” in the 3D View window. Move the camera around the object to fill in the missing pieces of the model.
  4. Click on the RGB-D Capture window and click Capture -> Pause in the menu at the top of the screen. Then click “Remove floor plane” in the 3D View window to remove most of the board.
  5. Click “Save current mesh” to save the vertices into a PLY file.

[Image: MeshLab]

The following steps convert the model captured with RGBDemo into a 3D mesh model in MeshLab (MeshLab can be installed through the Ubuntu Software Center). A scripted alternative using PCL is sketched after the list.

  1. Import the ply file created in the last section.
  2. Remove unwanted vertices in the model. (select and delete, let me know if you can’t figure out how to do this)
  3. Click “Filters -> Point Set -> Surface Reconstruction: Poisson”. This pops up a dialog; applying the default settings will generate a mesh with an estimated surface. If you check “View -> Show Layer Dialog” you should be able to see two layers, the original point set and the newly constructed mesh.
  4. To transfer color to the new mesh, click “Filters -> Sampling -> Vertex Attribute Transfer”. Select mesh.ply as the source and the Poisson mesh as the target. This should transfer the colors on the vertices to the mesh.
  5. Note that MeshLab has some problems when saving to the COLLADA (dae) format.
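If you prefer to script the reconstruction step instead of clicking through MeshLab, PCL can also do Poisson surface reconstruction. The sketch below is a minimal outline under some assumptions: it requires a PCL build whose surface module includes the Poisson class (possibly newer than the ros-fuerte-pcl mentioned earlier), it skips the color-transfer step entirely, and the output file name is just a placeholder.

// Minimal sketch: RGBDemo point cloud -> Poisson mesh with PCL (assumed setup).
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>
#include <pcl/common/io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/poisson.h>

int main()
{
  // Load the vertices saved by RGBDemo (mesh.ply from step 5 of the previous section).
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPLYFile("mesh.ply", *cloud);

  // Poisson needs oriented normals, so estimate them first.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setKSearch(20);
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.compute(*normals);

  pcl::PointCloud<pcl::PointNormal>::Ptr cloud_with_normals(new pcl::PointCloud<pcl::PointNormal>);
  pcl::concatenateFields(*cloud, *normals, *cloud_with_normals);

  // Poisson surface reconstruction, roughly what MeshLab's filter does.
  pcl::Poisson<pcl::PointNormal> poisson;
  poisson.setDepth(8);  // comparable to the octree depth setting in MeshLab
  poisson.setInputCloud(cloud_with_normals);
  pcl::PolygonMesh mesh;
  poisson.reconstruct(mesh);

  pcl::io::savePLYFile("poisson_mesh.ply", mesh);  // placeholder output name
  return 0;
}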

Book it: OpenNI Cookbook

In Book It, Computer Vision, Kinect, Point Cloud Library on November 20, 2013 at 7:53 pm

by Li Yang Ku (Gooly)

[Image: OpenNI Cookbook cover]

 

I was recently asked to help review a technical book, “OpenNI Cookbook”, about the OpenNI library for Kinect-like sensors. This is the kind of book that would be helpful if you are just starting to develop OpenNI applications on Windows. Although I did all my OpenNI research on Linux, that was mostly because I needed it to work with robots that use ROS (Robot Operating System), which was only supported on Ubuntu; OpenNI was always more stable and better supported on Windows than on Linux. However, if you plan to use PCL (Point Cloud Library) with OpenNI, you might still want to consider Linux.

[Image: OpenNI skeleton tracking]

The book covers everything from the basics to more advanced applications such as getting the raw sensor data, hand tracking, and skeleton tracking. It also has sections on things people don’t usually write about but that are crucial for actual software development, such as listening to device connect and disconnect events. The code in this book uses the OpenNI2 library, the latest version of OpenNI. Note that although OpenNI is open source, the NITE library used in the book for hand and human tracking isn’t (but it is free under certain licenses).
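To give a flavor of what skeleton tracking with NITE looks like in code, here is a rough sketch loosely based on the user tracker sample that ships with NiTE2; treat it as an outline under those assumptions rather than the book’s actual example, and note that most error handling is omitted.

// Rough sketch of NiTE2 skeleton tracking (loosely based on the NiTE2 user tracker sample).
#include <NiTE.h>
#include <cstdio>

int main()
{
    nite::NiTE::initialize();

    nite::UserTracker userTracker;
    if (userTracker.create() != nite::STATUS_OK)
        return 1;

    for (int n = 0; n < 300; ++n)  // read a few hundred frames
    {
        nite::UserTrackerFrameRef frame;
        if (userTracker.readFrame(&frame) != nite::STATUS_OK)
            continue;

        const nite::Array<nite::UserData>& users = frame.getUsers();
        for (int i = 0; i < users.getSize(); ++i)
        {
            const nite::UserData& user = users[i];
            if (user.isNew())
            {
                // Start tracking the skeleton of every newly detected user.
                userTracker.startSkeletonTracking(user.getId());
            }
            else if (user.getSkeleton().getState() == nite::SKELETON_TRACKED)
            {
                const nite::SkeletonJoint& head = user.getSkeleton().getJoint(nite::JOINT_HEAD);
                printf("user %d head at (%.0f, %.0f, %.0f) mm\n", user.getId(),
                       head.getPosition().x, head.getPosition().y, head.getPosition().z);
            }
        }
    }

    nite::NiTE::shutdown();
    return 0;
}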

You can also buy the book on Amazon.

 

 

Hands on Capri: building an OpenNI2 ROS interface

In Computer Vision, Kinect on June 9, 2013 at 10:57 am

by Gooly (Li Yang Ku)

[Image: point cloud captured with OpenNI2, displayed in RViz]

Luckily, because of my day job I was able to get my hands on the new PrimeSense sensor, Capri. According to their website, this gum-stick-sized gadget is unlikely to go on sale individually anytime soon. If you haven’t worked with any Kinect-like device before, PrimeSense is the company behind the old Kinect, the Asus Xtion, and OpenNI. The new sensor is a shrunken version of the original Kinect sensor intended to be embedded in tablets.

Since this sensor only works with OpenNI 2.2, I was forced to come out of my OpenNI 1 comfort zone and do the upgrade on my Ubuntu machine. (I am still using ROS Fuerte though; life is full of procrastination.) To tell the truth, switching is easy: the new OpenNI 2 uses different library names, so I had no problem keeping my old code and OpenNI 1 working while testing the new OpenNI 2.

[Image: Capri sensor + OpenNI + ROS]

To get your sensor (Xtion, Kinect, Capri) working with OpenNI2 on Ubuntu, download the OpenNI SDK. (Last week when I tested it, 2.2 only worked with Capri and 2.1 only worked with the Xtion. I am not sure why the 2.1 download no longer shows up on the website; they probably updated 2.2 so it works with both now.) Extract the zip file and open the folders until you see the install file, then run “sudo ./install” in the terminal. You can now test tools such as SimpleRead under the sample folder or NiViewer under the tools folder. (Note that Capri only worked with SimpleRead when I tested it.)
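For reference, the SimpleRead sample essentially boils down to the loop below. This is a simplified sketch, not the exact sample code: it opens the first device it finds, starts the depth stream, and prints the depth of the middle pixel for a handful of frames.

// Simplified SimpleRead-style depth read loop (sketch, not the shipped sample).
#include <OpenNI.h>
#include <cstdio>

int main()
{
    if (openni::OpenNI::initialize() != openni::STATUS_OK)
    {
        printf("Initialize failed: %s\n", openni::OpenNI::getExtendedError());
        return 1;
    }

    openni::Device device;
    if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK)
    {
        printf("Couldn't open device: %s\n", openni::OpenNI::getExtendedError());
        return 1;
    }

    openni::VideoStream depth;
    depth.create(device, openni::SENSOR_DEPTH);
    depth.start();

    // Grab a few frames and print the depth of the middle pixel.
    for (int n = 0; n < 100; ++n)
    {
        openni::VideoFrameRef frame;
        depth.readFrame(&frame);
        const openni::DepthPixel* pixels = (const openni::DepthPixel*)frame.getData();
        int middle = frame.getHeight() / 2 * frame.getWidth() + frame.getWidth() / 2;
        printf("frame %d: middle pixel depth = %d mm\n", frame.getFrameIndex(), pixels[middle]);
    }

    depth.stop();
    depth.destroy();
    device.close();
    openni::OpenNI::shutdown();
    return 0;
}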

To get Capri working with my old ROS (Robot Operating System) setup, I built a ROS package that publishes a depth image and a point cloud using the OpenNI 2 interface. This ROS interface should work with all PrimeSense sensors, not just Capri. The part worth pointing out is the CMakeLists.txt file in the ROS package; the following is what I added to link against the right library.

cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)

set(OPENNI2_DIR "{location of OpenNI2 library}/Linux-x64/OpenNI-Linux-x64-2.2.0/")
rosbuild_init()

set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)

link_directories("${OPENNI2_DIR}/Redist/")
include_directories("${OPENNI2_DIR}/Include")

rosbuild_add_executable(capri src/sensor.cpp)
target_link_libraries(capri boost_filesystem boost_system OpenNI2)

sensor.cpp is the file that reads in the depth information and publishes the depth image and point cloud. I simply modified the SimpleRead code that comes with the OpenNI 2 library (under the Sample folder). If you want to convert a depth image into a point cloud, check out the convertDepthToWorld function:

// Convert pixel (i, j) with raw depth value pDepth[index] from the depth
// stream "depth" into world coordinates (px, py, pz) in millimeters.
CoordinateConverter::convertDepthToWorld(depth, i, j,
                                         pDepth[index], &px, &py, &pz);
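For illustration, below is a heavily simplified, hypothetical version of what a sensor.cpp-style node could look like. It only covers the point cloud half (not the depth image), uses the simpler sensor_msgs/PointCloud message rather than PointCloud2, and the node, topic, and frame names are made up, so the actual package may well differ.

// Hypothetical minimal node: read OpenNI2 depth frames and publish a ROS point cloud.
#include <OpenNI.h>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>
#include <geometry_msgs/Point32.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "capri");  // illustrative node name
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<sensor_msgs::PointCloud>("capri_points", 1);

    openni::OpenNI::initialize();
    openni::Device device;
    device.open(openni::ANY_DEVICE);
    openni::VideoStream depth;
    depth.create(device, openni::SENSOR_DEPTH);
    depth.start();

    while (ros::ok())
    {
        openni::VideoFrameRef frame;
        depth.readFrame(&frame);
        const openni::DepthPixel* pDepth = (const openni::DepthPixel*)frame.getData();

        sensor_msgs::PointCloud cloud;
        cloud.header.stamp = ros::Time::now();
        cloud.header.frame_id = "camera_depth_frame";  // assumed frame name

        for (int j = 0; j < frame.getHeight(); ++j)
            for (int i = 0; i < frame.getWidth(); ++i)
            {
                int index = j * frame.getWidth() + i;
                if (pDepth[index] == 0) continue;  // no depth reading at this pixel

                float px, py, pz;
                openni::CoordinateConverter::convertDepthToWorld(
                    depth, i, j, pDepth[index], &px, &py, &pz);

                geometry_msgs::Point32 p;
                p.x = px / 1000.0f;  // OpenNI reports millimeters, ROS uses meters
                p.y = py / 1000.0f;
                p.z = pz / 1000.0f;
                cloud.points.push_back(p);
            }

        pub.publish(cloud);
        ros::spinOnce();
    }

    depth.stop();
    depth.destroy();
    device.close();
    openni::OpenNI::shutdown();
    return 0;
}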

The top image in this post is the point cloud I published to RViz. The depth image is 320×240 by default.

Recording 3D video (oni) files that align the RGB image with the depth image

In Computer Vision, Point Cloud Library on July 15, 2012 at 10:51 am

by Gooly (Li Yang Ku)

Kinect or Xtion-like devices provide an easy way to capture 3D depth images or videos. The OpenNI interface that is compatible with these devices comes with a handy tool, “NiViewer”, that captures 3D video into an oni file. The tool is located under “Samples/Bin/x64-Release/NiViewer” on Linux, and should be in the Start menu if you use Windows. After starting the tool, you can right click to show the menu. Clicking “Start Capture” and then “Stop” should generate an oni video file in the same folder.

However, the RGB video and depth video in the oni file are recorded separately and are not aligned. (It turns out they should be aligned if you press 9, but that didn’t work on my machine; see the comments.) This is due to the difference in camera position between the IR sensor and the RGB sensor. OpenNI does provide functions to adjust the depth image to match the RGB image (note that it is not doable the other way around). By adding some additional code to NiViewer, you should be able to record depth video that is aligned with the RGB image.

First open the file “src/NiViewer/Capture.cpp” and change the code that adds the depth node under the “captureFrame()” function to the following.

nRetVal = g_Capture.pRecorder->AddNodeToRecording(*getDepthGenerator(), g_Capture.nodes[CAPTURE_DEPTH_NODE].captureFormat);
START_CAPTURE_CHECK_RC(nRetVal, "add depth node");
g_Capture.nodes[CAPTURE_DEPTH_NODE].bRecording = TRUE;
DepthGenerator* depth = getDepthGenerator();

// Align the recorded depth image with the rgb image. Kinect and xtion handle
// registration differently, so try one "RegistrationType" first and fall back
// to the other if setting the alternative viewpoint fails.
depth->SetIntProperty ("RegistrationType", 1);
nRetVal = depth->GetAlternativeViewPointCap().SetViewPoint(*getImageGenerator());
if(XN_STATUS_OK != nRetVal)
{
	depth->SetIntProperty ("RegistrationType", 2);
	nRetVal = depth->GetAlternativeViewPointCap().SetViewPoint(*getImageGenerator());
	if(XN_STATUS_OK != nRetVal)
	{
		displayMessage("Getting and setting AlternativeViewPoint failed");
	}
}
g_Capture.nodes[CAPTURE_DEPTH_NODE].pGenerator = depth;

Then type make in the terminal under the NiViewer folder; the new NiViewer binary should be generated.

Kinect and Xtion sensors use different methods to generate this alternative-view depth image: the Kinect does the adjustment in software while the Xtion does it in hardware. This is why two different “RegistrationType” values are tried in the code above.