
Creating 3D mesh models using an Asus Xtion with RGBDemo and MeshLab on Ubuntu 12.04

In Computer Vision, Kinect on March 12, 2014 at 5:15 pm

by Li Yang Ku (Gooly)

[Image: Big Bang Theory Kinect 3D scanner]

Creating 3D models simply by scanning an object with low-cost sensors is something that sounds futuristic but isn’t. Although models scanned with a Kinect or Asus Xtion aren’t as pretty as CAD models or laser-scanned models, they can actually be helpful in robotics research: a not-so-perfect model scanned by the same sensor the robot carries is closer to what the robot perceives. In this post I’ll go through the steps of creating a polygon mesh model by scanning a Coke can with the Xtion sensor. The process consists of three parts: compiling RGBDemo, scanning the object, and converting the scanned vertices to a polygon mesh in MeshLab.

RGBDemo

RGBDemo is a great piece of open-source software that can help you scan objects into a single PLY file with the help of some AR tags. If you are using a Windows machine, running the compiled binary should be the easiest way to get started. However, if you are running Ubuntu, the following are the steps I took. (I had compile errors when following the official instructions, but they might still be worth a try.)

  1. Make sure you have OpenNI installed. I use the older OpenNI instead of OpenNI2. See my previous post about installing OpenNI on Ubuntu if you haven’t.
  2. Make sure you have PCL and OpenCV installed. For PCL I use the one that comes with ROS (ros-fuerte-pcl) and for OpenCV I have libcv2.3 installed.
  3. Download RGBDemo from GitHub (https://github.com/rgbdemo/rgbdemo):
    git clone --recursive https://github.com/rgbdemo/rgbdemo.git
  4. Modify the file linux_configure.sh under the rgbdemo folder. Add the following line among the other options (shown here with the surrounding lines for context) so that it won’t use OpenNI2.
        
        -DCMAKE_VERBOSE_MAKEFILE=1 \
        -DNESTK_USE_OPENNI2=0 \
        $*
  5. Modify rgbdemo/scan-markers/ModelAcquisitionWindow.cpp and comment out lines 57 to 61. (This fixes the compile error: ‘const class ntk::RGBDImage’ has no member named ‘withDepthDataAndCalibrated’.)
        
        void ModelAcquisitionWindow::on_saveMeshButton_clicked()
        {
            //if (!m_controller.modelAcquisitionController()->currentImage().withDepthDataAndCalibrated())
            //{
                //ntk_dbg(1) << "No image already processed.";
                //return;
            //}
    
            QString filename = QFileDialog::getSaveFileName
  6. cmake and build (the full sequence is sketched after this list):
    ./linux_configure.sh
    ./linux_build.sh
  7. The binary files should be built under build/bin/.
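
Putting it together, the whole build (steps 3 to 6) looks roughly like the sketch below. This assumes OpenNI, PCL, and OpenCV are already set up as in steps 1 and 2, and that the two edits from steps 4 and 5 are made before running the configure script:

    # clone RGBDemo together with its submodules (nestk, etc.)
    git clone --recursive https://github.com/rgbdemo/rgbdemo.git
    cd rgbdemo
    # edit linux_configure.sh (add -DNESTK_USE_OPENNI2=0) and comment out
    # lines 57-61 of scan-markers/ModelAcquisitionWindow.cpp, then build
    ./linux_configure.sh
    ./linux_build.sh
    # the binaries end up under build/bin/
    ls build/bin/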

[Image: turtle mesh]

To create a 3D mesh model, we first use RGBDemo to capture a model (a PLY file) that consists only of vertices.
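
For reference, the file saved by RGBDemo is just a colored point cloud. An ASCII PLY of that kind looks roughly like the sketch below, with positions and colors per vertex but no faces (this is a minimal illustrative example; the actual file may be binary and its header may differ slightly):

    ply
    format ascii 1.0
    comment colored point cloud, no faces yet
    element vertex 3
    property float x
    property float y
    property float z
    property uchar red
    property uchar green
    property uchar blue
    element face 0
    property list uchar int vertex_indices
    end_header
    0.01 0.02 0.55 201 34 41
    0.02 0.02 0.55 198 30 38
    0.02 0.03 0.56 190 28 36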

  1. Print out the AR tags located in the folder ./scan-markers/data/ and stick them on a flat board such that the numbers are close to each other. Put your target object in the center of the board.
  2. Run the binary ./build/bin/rgbd-scan-markers
  3. Two windows should pop up: RGB-D Capture and 3D View. Point the camera toward the object on the board and click “Add current frame” in the 3D View window. Move the camera around the object to fill in the missing pieces of the model.
  4. Click on the RGB-D Capture window and select Capture -> Pause from the menu at the top of the screen. Click “Remove floor plane” in the 3D View window to remove most of the board.
  5. Click “Save current mesh” to save the vertices into a PLY file.

MeshLab

The following steps convert the model captured from RGBDemo to a 3D mesh model in MeshLab (MeshLab can be installed through the Ubuntu Software Center).

  1. Import the PLY file created in the last section.
  2. Remove unwanted vertices in the model. (Select and delete; let me know if you can’t figure out how to do this.)
  3. Click “Filters -> Point Set -> Surface Reconstruction: Poisson”. This pops up a dialog; applying the default settings will generate a mesh with an estimated surface. If you check “View -> Show Layer Dialog” you should be able to see two layers, the original and the newly constructed mesh.
  4. To transfer color to the new mesh, click “Filters -> Sampling -> Vertex Attribute Transfer”. Select mesh.ply as the source and the Poisson mesh as the target. This should transfer the colors on the vertices to the mesh. (A scripted version of these steps is sketched after this list.)
  5. Note that MeshLab has some problems when saving to the COLLADA (.dae) format.
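
If you need to repeat this often, roughly the same pipeline can be run in batch mode with meshlabserver, MeshLab’s command-line companion. This is only a sketch: poisson.mlx stands for a hypothetical filter script you would first save from the MeshLab GUI (via the current filter script dialog), and the exact flags can differ between MeshLab versions:

    # apply a saved Poisson + vertex-attribute-transfer filter script to the
    # cleaned-up scan; -om vc asks for vertex colors in the output (assumed flags)
    meshlabserver -i mesh.ply -o mesh_poisson.ply -s poisson.mlx -om vc
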
Comments

  1. I’m excited to finally get this running on Ubuntu 14.04.2 LTS, but also disappointed to realize the author(s) of RGBDemo went commercial and then stopped developing it.

  2. Any idea why I’m not getting a properly formed can when scanning? What seems to be happening is that the rotation is adding data to the image, but it’s offset by almost half the can’s diameter, so when done it looks more like a pinwheel than a can.

    I’ve tried putting the marker prints (8.5″x11″) side by side and tried placing the numbers on the printed pages on top of each other. I even set the printer to 100% with no scaling, but that didn’t change the outcome either.

    • Does your board and marker look about the same size as the one shown in the post?

      • Yes, they are square markers just a bit less than the height/length of a soup can. When placed edge to edge with the numbers towards each other, the 4 markers form a somewhat rectangular shape.

        I’m wondering, since it’s not a Kinect, whether the angle to the board needs to be set or fixed at something. I notice the app has a slider for the Kinect tilt, but the Asus Xtion Pro Live does not have this.

      • I found NiTE on a .ru site, installed it, and now Sample-NiHandTracker works and rgbdemo compiles and runs! I had to add the lines below to the Dockerfile before git’ing rgbdemo. I have not confirmed/tested them in the build script, as I’ve only run them manually in the container:

        # install NiTE middleware
        RUN wget http://www.openni.ru/wp-content/uploads/2013/10/NITE-Bin-Linux-x64-v1.5.2.23.tar.zip
        RUN unzip NITE-Bin-Linux-x64-v1.5.2.23.tar.zip
        RUN rm NITE-Bin-Linux-x64-v1.5.2.23.tar.zip
        RUN tar -xf NITE-Bin-Linux-x64-v1.5.2.23.tar.bz2
        # each RUN starts a fresh shell, so the cd and the install must be chained
        RUN cd NITE-Bin-Dev-Linux-x64-v1.5.2.23 && ./install.sh

        ENV QT_X11_NO_MITSHM 1

    • The angle to the board is not the problem. I am not quite sure what went wrong, but if you can borrow an old Kinect and test it, you might be able to rule out hardware problems.

      • I will have access to a Kinect in 2 weeks, but in the meantime I will try the calibration process and figure out what other tests I can do. Maybe put a known-size marker on the floor, use rgbd-reconstruct, and then measure that marker in MeshLab. Interestingly, rgbd-viewer pulls in OpenNI2 while rgbd-scan-markers and rgbd-reconstruct pull in OpenNI (v1.5.7).

    • When I did this post I only used OpenNI, not OpenNI2. How about trying to have only one version of OpenNI on the computer? I remember that having the right versions installed correctly was important in related work.

      • Instead of removing OpenNI2, I will see if something like Docker can be used to create a fresh installation of the entire process without creating a new Ubuntu OS installation on the HD or in a VM. I’ve never used Docker before, but if it works there will be a single container/tar file which others can use, instead of everyone needing to have all the development stuff loaded to build, run, and test.

    • Let me know how it works. Does the depth image in RGB-D Capture look right?

      • The depth image looks OK as far as I can tell. I get the rainbow colors, starting with red for the closest and violet for distant objects.

      • I have NiViewer working out of the Docker container from a build script, but compiling rgbdemo is still elusive. I tried using almost all the packages out of ROS, and while getting close, it still fails in nestk.

        FWIW, here are two files I use to build the docker image and run the docker container:

        (Dockerfile = ubuDockerFile-OpenNI)

        FROM ubuntu

        # quiets down mesg to console about TERM not being set
        ENV TERM linux

        RUN apt-get update
        RUN apt-get install -qqy openssh-client wget git libusb-1.0-0-dev freeglut3-dev openjdk-7-jdk doxygen graphviz software-properties-common cmake build-essential
        RUN apt-get install -qqy xterm

        # getting ROS packages because they contain all the openni,opencv/pcl goodness needed
        RUN sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'
        RUN wget http://packages.ros.org/ros.key -O - | apt-key add -
        RUN apt-get update
        RUN apt-get install -qqy ros-indigo-perception-pcl
        RUN apt-get install -qqy ros-indigo-vision-opencv

        # Sample-NiHandTracker does not work so I was attempting to use libfreenect
        # libfreenect drivers are what’s used on the laptop which works
        # complains: GestureGenerator.Viewer init failed
        #RUN apt-get install -qqy ros-indigo-libfreenect
        ##libfreenect-demos

        # copied my NiTE directories into the container to satisfy build complaints
        #scp -r x@X:/usr/include/nite /usr/include
        #scp -r x@X:/usr/lib/libXnNite* /usr/lib

        RUN add-apt-repository --yes ppa:xqms/opencv-nonfree
        RUN apt-get update
        RUN apt-get install -qqy libopencv-nonfree-dev

        # if UsbInterface is commented out, remove the comment char and flag(DJL) as changed
        RUN sed -i '/;UsbInterface=2/c\UsbInterface=2;DJL' /etc/openni/GlobalDefaults.ini

        # get rgbdemo
        #RUN cd; git clone --recursive https://github.com/rgbdemo/rgbdemo.git
        #cd rgbdemo/
        # do stuff to edit linux_configure.sh and scan-marker/ModelAcquisitionWindow.cpp

        # get rules.d file
        #/etc/udev/rules.d/55-primesense-usb.rules

        ENV DISPLAY :0
        CMD xterm
        #CMD NiViewer

        The script I use to build the Docker image and run the Docker container goes like this:

        #!/bin/bash

        #create the container using the Docker File ubuDockerFile-OpenNI
        #it's based on the default ubuntu container
        #
        sudo docker build -t openni - < ubuDockerFile-OpenNI

        # create the vars to pass to docker linking host file X0 to container file X0
        XSOCK=/tmp/.X11-unix/X0
        VBUS=/dev/bus/usb

        # runs the xterm X11 app in the container and opens an xterm to the docker container
        sudo docker run --privileged -v $XSOCK:$XSOCK -v $VBUS:$VBUS openni

  3. I have the same scanning problem in this Docker container too, but it looks like a calibration issue. I started a capture in the 3D View window, then zoomed in on the point cloud and noticed the text on the can was showing up on the flat surface and missing from some of the vertical surface area. If I aim the camera to the right of the marker board by about 0.5 meters, I get a shape more like a can.

  4. My post saying that I have it working from within the Docker container got lost. I’ve spent the morning cleaning up the Dockerfile and the run script, and now I’m deciding which blogging site to use to blog about this. I also have a number of other projects people are asking for/about, so I want to pick the right one. I’m leaning towards WordPress.

  5. It was the Sensor driver version that was causing the problem. I had v5.1.0.? but when I found and used 5.1.6.6, the point cloud and RGB images were aligned.

    Now I need to find out why I get one initial good RGB image of the soup can, but as I rotate it the RGB image data seems to go away. It looks like I get a decent point cloud, but I don’t see where all the RGB image data is. I’m not too worried about this, since now I can at least get meshes and create 3D printable objects.

    It’s now working on my laptop and within a Docker container (with 3D acceleration). :-)

    • Nice, glad you figured it out. Sometimes MeshLab doesn’t show color if it doesn’t like the format. Try saving it in a different format and see if the color shows up in MeshLab.

      • It was me and/or MeshLab losing the color when I saved a mesh after cleanup and then tried to get color from it. I pulled in the original mesh, transferred the color from that, and it showed up.

        Boy, it’s a good thing I’m on “vacation”, or else there’s no way I could have spent the continuous time on this to get it working. It doesn’t look like anyone is working on this stuff anymore since Apple bought PrimeSense, but I’ll blog what I did and post my Docker setup so anyone can repeat it much quicker. Thanks for the help and the blog info.

        BTW, I created a blog for this and other things: http://gottahackit.com

      • Cool, nice work!
