Life is a game, take it seriously

Archive for the ‘Robotics’ Category

RSS 2018 Highlights

In Machine Learning, Paper Talk, Robotics on July 10, 2018 at 3:18 pm

by Li Yang Ku (Gooly)

I was at RSS (Robotics: Science and Systems) in Pittsburgh a few weeks ago. The conference was held in the Carnegie Music Hall, and the conference badge could also be used to visit the two Carnegie museums next to it. (The Eskimo and Native American exhibition on the third floor is a must see. Just in case you don’t know, an igloo can be built within 1.5 hours by just two Inuits, and there is a video of it.)

RSS is a relatively small conference compared to IROS and ICRA. With only a single track, you get to see every accepted paper from many different fields, ranging from robotic whiskers to surgical robots. I would however argue that the highlights of this year’s RSS were the keynote talks by Bernardine Dias and Chad Jenkins. Unlike most keynote talks I’ve been to, these two talks were less about new technologies and more about humanity and diversity. In this post, I am going to talk about both talks plus a few interesting papers at RSS.

a) Bernardine Dias, “Robotics technology for underserved communities: challenges, rewards, and lessons learned.”

Bernardine’s group focuses on adapting technologies so that they can be accessible to communities that are left behind. One of the technologies developed was a tool for helping blind students learn braille, and it has had a significant impact on blind communities across the globe. Bernardine gave an amazing talk at RSS; however, the video of her talk is not public yet (I am not sure it will be), and surprisingly not many videos of her are on the internet. The closest content I could find is a really nice audio interview with Bernardine. There is also a short video describing their work below, but what this talk was really about is not the technology or the design, but the lessons learned through helping these underserved communities.

When roboticists talk about helping society, many of them focus on the technology and leave the actual application to the future. Bernardine’s group is different in that they actually travel to these underserved communities to understand what they need and integrate their feedback directly into the design process. This is easier said than done. You have to understand each community before your visit; some acts are considered good in one culture but an insult in another. Giving without understanding often results in waste. Bernardine mentioned in her talk that one of the schools in an underserved community they collaborated with received a large one-time donation for buying computers. It was a large event where important people came, and it was broadcast on the news. However, to accommodate this hardware, the two-classroom school had to give up one of its classrooms and therefore reduce the number of classes it could teach. Ironically, the school had neither the resources to power these computers nor the people to teach students or teachers how to use them. The donation actually resulted in more harm than help to the community.

b) Odest Chadwicke (Chad) Jenkins, “Robotics: Making the World a Better Place through Minimal Message-oriented Transport Layers.”

While Bernardine tries to change technologies for underserved communities, Chad tries to design interfaces that help people with disabilities by deploying robots to their homes. Chad showed some of the work done with Henry Evans by Charlie Kemp’s group and his own lab. Henry Evans was a successful financial officer in Silicon Valley until a stroke left him paralyzed and mute. However, Henry did not give up living fully and became a strong advocate of robots for people with disabilities. Henry’s story is inspiring and an example of how robots can help people with disabilities live freely. The Robots for Humanity project is the result of these successful collaborations. Since then, Henry has given three TED talks through robots, and the one below shows how Chad helped him fly a quadrotor.

 

However, the highlight of Chad’s talk was when he called for more diversity in the community. Minorities, especially African Americans and Latinos, are heavily underrepresented in the robotics community in the U.S. The issue of diversity is usually not something roboticists or computer scientists think of or list as a priority. Based on Chad’s numbers, past robotics conferences, including RSS, were not immune to this kind of negligence. This is not hard to see: among the thousands of conference talks I’ve been to, there were probably no more than three talks by African American speakers. Although there is no obvious solution to this problem yet, having the community acknowledge that this is a problem is an important first step. Chad urged people to be aware of whether everyone is given equal opportunities, and simply being friendly to isolated minorities at a conference may make a difference in the long run.

c) Rico Jonschkowski, Divyam Rastogi, and Oliver Brock. “Differentiable Particle Filters.”

This work introduces a differentiable particle filter (DPF) that can be trained end to end. The DPF is composed of an action sampler that generates action samples, an observation encoder, a particle proposer that learns to generate new particles based on observations, and an observation likelihood estimator that weights each particle. These four components are feedforward networks that can be learned from training data. What I found interesting is that the authors made comments similar to the authors of the paper Deep Image Prior: deep learning approaches work not just because of learning, but also because of engineered structures such as convolutional layers that encode priors. This motivated the authors to look for architectures that can encode prior knowledge of algorithms into the neural network.
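
Here is a minimal sketch (in PyTorch) of how these four learnable components could be wired into one filtering step; the layer sizes, noise scale, and proposal ratio are my own illustrative choices, not the paper’s, and the paper treats resampling and the weighting of proposed particles more carefully than shown here.

```python
import torch
import torch.nn as nn

class DifferentiablePF(nn.Module):
    """Toy differentiable particle filter: every component is a small
    feedforward network, so the whole update is end-to-end trainable."""
    def __init__(self, state_dim=3, obs_dim=32, act_dim=2, hidden=64):
        super().__init__()
        # predicts noisy per-particle state changes from (state, action)
        self.action_sampler = nn.Sequential(
            nn.Linear(state_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))
        # encodes the raw observation into a compact feature
        self.obs_encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # proposes fresh particles directly from the observation feature
        self.particle_proposer = nn.Linear(hidden, state_dim)
        # scores how well a particle explains the observation feature
        self.obs_likelihood = nn.Sequential(
            nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, particles, weights, action, obs):
        n = particles.shape[0]
        act = action.expand(n, -1)
        # 1) motion update: sample per-particle state changes
        noise = torch.randn_like(particles) * 0.05
        particles = particles + self.action_sampler(
            torch.cat([particles, act], dim=1)) + noise
        # 2) encode the observation once and broadcast it
        feat = self.obs_encoder(obs).expand(n, -1)
        # 3) replace a small fraction of particles with proposed ones
        #    (the paper weights proposed particles more carefully; here we
        #    simply reuse the old weights for brevity)
        n_new = n // 10
        particles = torch.cat(
            [particles[: n - n_new], self.particle_proposer(feat[:n_new])], dim=0)
        # 4) reweight all particles by the learned observation likelihood
        lik = self.obs_likelihood(torch.cat([particles, feat], dim=1)).squeeze(1)
        weights = weights * lik
        return particles, weights / weights.sum()

p, w = torch.rand(100, 3), torch.ones(100) / 100
p, w = DifferentiablePF()(p, w, torch.zeros(2), torch.randn(32))
```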

d) Marc Toussaint, Kelsey R. Allen, Kevin A. Smith, and Joshua B. Tenenbaum. “Differentiable Physics and Stable Modes for Tool-Use and Manipulation Planning.”

Task and Motion Planning (TAMP) approaches combine symbolic task planners and geometric motion planners hierarchically. Symbolic task planners can be helpful in solving task sequences based on high-level logic, while geometric planners operate on detailed specifications of the world state. This work is an extension that further considers dynamic physical interactions. The whole robot action sequence is modeled as a sequence of modes connected by switches, where modes represent durations that have constant contact or can be modeled by kinematic abstractions. The task can therefore be written in the form of a Logic-Geometric Program in which the whole sequence is jointly optimized. The video above shows that such an approach can solve tasks that the authors call physical puzzles. This work also won the best paper award at RSS.
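
To make the idea of “a sequence of modes connected by switches, jointly optimized” concrete, here is a toy 1-D sketch of my own (not the authors’ formulation): a fixed symbolic skeleton (move to the object, then carry it to the goal) imposes equality constraints at the switches, and the continuous switch states are optimized jointly.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D problem: robot starts at 0, the object sits at 2.0, the goal is at 5.0.
# Symbolic skeleton (fixed here): [move_to_object, carry_to_goal].
robot_start, object_pos, goal_pos = 0.0, 2.0, 5.0

def cost(x):
    # x = [x_switch, x_end]: robot position at the mode switch and at the end.
    # Cost: squared path length of the whole motion, over both modes jointly.
    x_switch, x_end = x
    return (x_switch - robot_start) ** 2 + (x_end - x_switch) ** 2

constraints = [
    # switch constraint: to start the "carry" mode the robot must be at the object
    {"type": "eq", "fun": lambda x: x[0] - object_pos},
    # goal constraint: the "carry" mode must end with the object at the goal
    {"type": "eq", "fun": lambda x: x[1] - goal_pos},
]

res = minimize(cost, x0=np.zeros(2), constraints=constraints)
print("switch state:", res.x[0], "final state:", res.x[1])
# A full Logic-Geometric Program additionally searches over the symbolic
# skeleton itself and models the dynamics inside each mode; this sketch only
# shows the joint optimization of the continuous switch states for one skeleton.
```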


Talk Picks: IROS 2017

In deep learning, Machine Learning, Robotics on February 10, 2018 at 1:06 pm

by Li Yang Ku (Gooly)

I was at IROS (International Conference on Intelligent Robots and Systems) in Vancouver recently (September 2017; this post took way too long to finish) to present work I did almost two years ago. Interestingly, there were four deep learning related sessions this year, and there were quite a few papers that I found interesting; however, the talks at IROS were what I found the most inspiring. I am going to talk about three of them in the following.

a) “Toward Unifying Model-Based and Learning-Based Robotics”, plenary talk by Dieter Fox.  

In my previous post, I talked about how the machine learning field differs from the robotics field: machine learning learns from data, while robotics designs models that describe the environment. In this talk, Dieter tries to glue both worlds together. The 50-minute talk is posted below; for those who don’t have 50 minutes, I describe it briefly in the following.

Dieter first described a list of work his lab did (robot localization, RGB-D matching, real-time tracking, etc.) using model-based approaches. Model-based approaches match models to data streams and control the robot by finding actions that reach the desired state. One of the benefits of such approaches is that our own knowledge of how the physical world works can be injected into the model. Dieter then gave a brief introduction to deep learning and to one of his students’ work on learning visual descriptors in a self-supervised way, which I covered in a previous post. Based on the recent success of deep learning, Dieter suggested that there are ways to incorporate model-based approaches into a deep learning framework, and showed an example of how knowledge of rigid body motion can be added to a network by forcing it to output segmentations and their poses. The overall conclusion is that 1) model-based approaches are accurate within a local basin of attraction in which the models match the environment, 2) deep learning provides a larger basin of attraction in the trained regime, and 3) unifying both approaches gives you more powerful systems.

 

b) “Robotics as the Path to Intelligence”, keynote talk by Oliver Brock

Oliver Brock gave an exciting interactive talk on understanding intelligence in one of the IROS keynote sessions. Unfortunately, it was not recorded and the slides cannot be distributed, so I posted the most similar talk he has given below instead. It is also a pretty good talk with some overlapping content, although under a different topic.

In the IROS talk, Oliver made a few points. First, he started out with AlphaGo by DeepMind, stating that its success in the game of Go is very similar to IBM’s Deep Blue beating the chess champion in 1997. In both cases, despite the system’s superior game play, it needs a human to make the moves for it. A lot of things that humans are good at are usually difficult for our current approaches to artificial intelligence. How we define intelligence is crucial because it shapes our research directions and how we solve problems. Oliver then showed, by performing an interactive experiment with the audience, that defining intelligence is non-trivial and has to do with what we perceive. He then talked about his work on integrating cross-modal perception and action, the importance of manipulation to intelligence, and soft hands that can solve hard manipulation problems.

 

c) “The Power of Procrastination”, special event talk by Jorge Cham

This is probably the most popular of all the IROS talks. The speaker, Jorge Cham, is the author of the popular PHD Comics (which I may have posted on my blog without permission) and has a PhD in robotics from Stanford University. The following is not the exact talk he gave at IROS, but it is very similar.

 

Machine Learning, Computer Vision, and Robotics

In Computer Vision, Machine Learning, Robotics on December 6, 2017 at 2:32 pm

By Li Yang Ku (Gooly)

Having TA’d for Machine Learning this semester and worked in the fields of Computer Vision and Robotics for the past few years, I always have this feeling that the more I learn the less I know. Therefore, it’s sometimes good to just sit back and look at the big picture. This post talks about how I see the relations between these three fields at a high level.

First of all, Machine Learning is more a brand than a name. Just like Deep Learning and AI, the name is used for getting funding when the previous name is out of hype; in this case, the name became popular after AI projects failed in the 70s. Machine Learning therefore covers a wide range of problems and approaches that may look quite different at first glance. AdaBoost and support vector machines were the hot topics in Machine Learning when I was doing my master’s degree, but now it is deep neural networks that get all the attention.

Despite the wide variety of research in Machine Learning, it usually shares a common assumption: a set of data exists, and the goal is to learn a model based on this data. There is a wide range of variations here: the data could be labeled or unlabeled, resulting in supervised or unsupervised approaches; the labels could be categories or real numbers, resulting in classification or regression problems; the model can be limited to a certain form, such as a class of probability models, or can have fewer constraints, as in the case of deep neural networks. Once the model is learned, there is also a wide range of possible uses. It can be used for predicting outputs given new inputs, filling in missing data, generating new samples, or providing insights on hidden relationships between data entries. Data is so fundamental in Machine Learning that people in the field don’t really ask why we learn from data. Many datasets from different fields are collected or labeled, and the learned models are compared based on accuracy, computation speed, generalizability, etc. Machine Learning people therefore often consider Computer Vision and Robotics as areas for applying Machine Learning techniques.
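
The recipe described above looks the same whether the labels are categories or real numbers; here is a minimal scikit-learn sketch of both cases on toy data (not any particular benchmark):

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LogisticRegression, LinearRegression

# Same recipe in both cases: start from a dataset, fit a model, predict on new inputs.
# Labels are categories -> classification.
Xc, yc = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression().fit(Xc, yc)
print("predicted class:", clf.predict(Xc[:1]))

# Labels are real numbers -> regression.
Xr, yr = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)
reg = LinearRegression().fit(Xr, yr)
print("predicted value:", reg.predict(Xr[:1]))
```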

Robotics, on the other hand, comes from a very different background. There is usually no data to start with in robotics: if you cannot control your robot, or if your robot crashes itself at its first move, how are you going to collect any data? Classical robotics is therefore about designing models based on physics and geometry. You build models of how the input and the current observation change the robot state, and based on this model you can infer the input that will safely drive the robot to a certain state.
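
As a toy illustration of this classical recipe (a hypothetical 1-D robot, not any particular system): write down a model of how the input changes the state, then invert it with a simple feedback law to drive the robot toward the desired state.

```python
# Classical model-based control on a toy 1-D robot: the "model" says a velocity
# command directly changes position (x_next = x + v * dt), and the controller
# inverts that model with proportional feedback.
dt = 0.05
x, x_goal = 0.0, 1.0
k_p = 2.0                       # feedback gain
for step in range(100):
    v = k_p * (x_goal - x)      # pick the input that reduces the state error
    v = max(min(v, 0.5), -0.5)  # respect an actuator limit
    x = x + v * dt              # the physical model of how inputs change the state
print("final position:", x)
```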

Once you can command your robot to reach a certain state, a wide variety of problems emerge. The robot then has to do obstacle avoidance and path planning to reach a goal. You may need to find a goal state that satisfies a set of restrictions while optimizing a set of properties. Simultaneous localization and mapping (SLAM) may be needed if no map is given. In addition, sensor fusion is required when multiple sensors with different properties are used. There may also be uncertainty in the robot state, where belief space planning may be helpful. For robots with a gripper, you may also need to identify stable grasps and recognize the type and pose of an object for manipulation. And of course, there is a whole different set of problems in designing the mechanics and hardware of the robot. Unlike Machine Learning, many of these problems are solved without a set of data. However, most of these robotics problems (excluding mechanical and hardware problems) share a common goal of determining the robot input based on feedback. (Some) roboticists view robotics as the field with the ultimate goal of creating machines that act like humans, and Machine Learning and Computer Vision as fields that can provide methods to help accomplish that goal.

The field of Computer Vision started under AI in the 60s with the goal of helping robots achieve intelligent behaviors, but left that goal behind in the internet era, when tons of images online were waiting to be classified. In this age, computer vision applications are no longer restricted to physical robots. In the past decade, the field of Computer Vision has been driven by datasets. The implicit agreement on evaluating against standardized datasets helped the field advance at a reasonably fast pace (at the cost of millions of grad student hours spent tweaking models for a 1% improvement.) Given these datasets, the field of Computer Vision inevitably left the Robotics community and embraced data-driven Machine Learning approaches. Most Computer Vision problems share a common goal of learning models for visual data; the model is then used for classification, clustering, sample generation, etc. on images or videos. The big picture of Computer Vision can be seen in my previous post. Some Computer Vision scientists consider vision different from other senses and believe that the development of vision is fundamental to the evolution of intelligence (which could be true: experiments do show that 50% of our brain neurons are vision related.) Nowadays, Computer Vision and Machine Learning are deeply entangled; Machine Learning techniques help foster Computer Vision solutions, while successful models in Computer Vision contribute back to the field of Machine Learning. For example, the success story of Deep Learning started with Machine Learning models being applied to the ImageNet challenge, and ended up with a wide range of architectures that can be applied to other problems in Machine Learning. On the other hand, Robotics is a field that Computer Vision folks are gradually moving back to. Several well-known Computer Vision scientists, such as Jitendra Malik, have started to consider, given the recent success of data-driven approaches in Computer Vision, how Computer Vision can help the field of Robotics, since their conversations with Robotics colleagues were mostly about vision not working.

Paper Picks: ICRA 2017

In Computer Vision, deep learning, Machine Learning, Paper Talk, Robotics on July 31, 2017 at 1:04 pm

by Li Yang Ku (Gooly)

I was at ICRA (International Conference on Robotics and Automation) in Singapore this June to present some of my work. Surprisingly, the computer vision track seemed to gain a lot of interest in the robotics community; the four computer vision sessions were the most crowded ones among all the sessions I attended. The following are a few papers related to computer vision and deep learning that I found quite interesting.

a) Schmidt, Tanner, Richard Newcombe, and Dieter Fox. “Self-supervised visual descriptor learning for dense correspondence.”

In this work, a self-supervised learning approach is introduced for generating dense visual descriptors with convolutional neural networks. Given a set of RGB-D videos of Schmidt, the first author, wandering around, training data can be automatically generated by using Kinect Fusion to track feature points between frames. A pixel-wise contrastive loss is used such that two points that belong to the same model point have similar descriptors.
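
A minimal sketch of such a pixel-wise contrastive loss (my own simplification in PyTorch, not the authors’ code): descriptor pairs that Kinect Fusion says correspond to the same model point are pulled together, while non-matching pairs are pushed at least a margin apart.

```python
import torch
import torch.nn.functional as F

def pixelwise_contrastive_loss(desc_a, desc_b, matches, non_matches, margin=0.5):
    """desc_a, desc_b: (H*W, D) descriptor maps of two frames (flattened).
    matches / non_matches: (K, 2) long tensors of index pairs
    (pixel index in frame A, pixel index in frame B)."""
    d_match = F.pairwise_distance(desc_a[matches[:, 0]], desc_b[matches[:, 1]])
    d_non = F.pairwise_distance(desc_a[non_matches[:, 0]], desc_b[non_matches[:, 1]])
    # same model point -> small distance; different points -> at least `margin` apart
    loss_match = (d_match ** 2).mean()
    loss_non = (F.relu(margin - d_non) ** 2).mean()
    return loss_match + loss_non
```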

Kinect Fusion cannot associate points between videos; however, with just the training data within each video, the authors show that the learned descriptors of the same model point (such as the tip of the nose) are similar across videos. This can be explained by the hypothesis that, with enough data, a model point trajectory will inevitably come near the trajectory of the same model point in another video. By chaining these trajectories, clusters of the same model point can be separated even without labels. The figure above visualizes the learned features with colors. Note that the network learns a similar mapping across videos despite having no training signal across videos.

b) Pavlakos, Georgios, Xiaowei Zhou, Aaron Chan, Konstantinos G. Derpanis, and Kostas Daniilidis. “6-dof object pose from semantic keypoints.”

In this work, semantic keypoints predicted by convolutional neural networks are combined with a deformable shape model to estimate the pose of object instances or objects of the same class. Given a single RGB image of an object, a set of class-specific keypoints is first identified through a CNN that is trained on labeled feature point heat maps. A fitting problem that maps these keypoints to keypoints on the 3D model is then solved using a deformable model that captures shape variability. The figure above shows some pretty good results on recognizing the same feature across objects of the same class.

The CNN used in this work is the stacked hourglass architecture, where two hourglass modules are stacked together. The hourglass module was introduced in the paper “Newell, Alejandro, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. ECCV, 2016.” An hourglass module is similar to a fully convolutional neural network but with residual modules, which the authors claim makes it more balanced between downsampling and upsampling. Stacking multiple hourglass modules allows repeated bottom-up, top-down inference, which improves on state-of-the-art performance.
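
A heavily simplified sketch of the hourglass idea in PyTorch: residual blocks process the features at full and half resolution (the real module repeats this over many more scales and adds intermediate supervision), and the two paths are merged before predicting per-keypoint heat maps. The channel counts and depth here are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Residual(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class TinyHourglass(nn.Module):
    """One hourglass: process at full and half resolution, then merge."""
    def __init__(self, ch=64, n_keypoints=16):
        super().__init__()
        self.pre = Residual(ch)
        self.down = Residual(ch)    # processed after pooling (half resolution)
        self.skip = Residual(ch)    # processed at full resolution (skip branch)
        self.post = Residual(ch)
        self.head = nn.Conv2d(ch, n_keypoints, 1)   # per-keypoint heat maps
    def forward(self, x):
        full = self.skip(self.pre(x))                  # keep the full-resolution path
        low = self.down(F.max_pool2d(self.pre(x), 2))  # bottom-up: pool and process
        up = F.interpolate(low, scale_factor=2)        # top-down: upsample back
        return self.head(self.post(full + up))         # merge the two resolutions

# input is a 64-channel feature map from a stem conv, not a raw image
heatmaps = TinyHourglass()(torch.randn(1, 64, 64, 64))  # -> (1, 16, 64, 64)
```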

c) Sung, Jaeyong, Ian Lenz, and Ashutosh Saxena. “Deep Multimodal Embedding: Manipulating Novel Objects with Point-clouds, Language and Trajectories.”

In this work, point cloud, natural language, and manipulation trajectory data are mapped to a shared embedding space using a neural network. For example, given the point cloud of an object and a set of instructions as input, the neural network should map them to a region in the embedding space that is close to the trajectory that performs the instructed action. Instead of taking the whole point cloud as input, a segmentation process that decides which part of the object to manipulate based on the instruction is executed first. Given this shared embedding space, the trajectory closest to where the input point cloud and language map to can be executed at test time.

In order to learn a semantically meaningful embedding space, a loss-augmented cost that considers the similarity between different types of trajectories is used. The results show that the network puts similar groups of actions, such as pushing a bar and moving a cup to a nozzle, close to each other in the embedding space.
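
A rough sketch of the shared embedding space with a loss-augmented cost (the encoder sizes, the way the two input modalities are combined, and the dissimilarity term are placeholders of mine, not the paper’s): the matching trajectory must be closer to the point-cloud-plus-language embedding than a non-matching one, by a margin that grows with how different the two trajectories are.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 32
point_cloud_enc = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, embed_dim))
language_enc = nn.Sequential(nn.Linear(50, 128), nn.ReLU(), nn.Linear(128, embed_dim))
trajectory_enc = nn.Sequential(nn.Linear(70, 128), nn.ReLU(), nn.Linear(128, embed_dim))

def loss_augmented_embedding_loss(cloud, instr, traj_pos, traj_neg, traj_dissim):
    """traj_dissim: how different the negative trajectory is from the correct one;
    more dissimilar trajectories must be pushed further away (loss augmentation)."""
    task = point_cloud_enc(cloud) + language_enc(instr)   # combined task embedding
    d_pos = F.pairwise_distance(task, trajectory_enc(traj_pos))
    d_neg = F.pairwise_distance(task, trajectory_enc(traj_neg))
    margin = 0.1 + traj_dissim
    return F.relu(d_pos - d_neg + margin).mean()

loss = loss_augmented_embedding_loss(torch.randn(8, 300), torch.randn(8, 50),
                                     torch.randn(8, 70), torch.randn(8, 70),
                                     torch.rand(8))
```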

d) Finn, Chelsea, and Sergey Levine. “Deep visual foresight for planning robot motion.”

In this work, a video prediction model based on a convolutional LSTM (long short-term memory) network is used to predict the pixel flow transformation from the current frame to the next frame for a non-prehensile manipulation task. The model takes the current image, the end-effector pose, and a future action, and predicts the image at the next time step. The predicted image is then fed back into the network recursively to generate subsequent images. The network is learned from 50,000 pushing examples of hundreds of objects collected on 10 robots.

For each test, the user specifies where certain pixels on an object should move to; the robot then uses the model to determine the actions most likely to reach the target, using an optimization algorithm that samples actions over several iterations. Some of the results are shown in the figure above. The first column shows the interface where the user specifies the goal: the red markers are the starting pixel positions and the green markers of the same shape are the goal positions. Each row shows a sequence of actions taken to reach the specified target.
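
The action selection loop can be sketched as sampling-based optimization in the spirit of the cross-entropy method; the prediction model and pixel cost below are trivial stand-ins for the learned convolutional LSTM and the user-specified pixel goals, and the sampler details are mine, not the authors’.

```python
import numpy as np

def predict_next_image(image, action):
    """Stand-in for the learned convolutional LSTM video prediction model."""
    return image + 0.01 * action.sum()

def cost_to_goal(image, goal_value=1.0):
    """Stand-in for how far the designated pixels are from the user-specified goals."""
    return abs(image.mean() - goal_value)

def plan_push(image, horizon=5, n_samples=100, n_elite=10, n_iters=3):
    mean = np.zeros((horizon, 2))        # pushing actions, e.g. (dx, dy) per step
    std = np.ones((horizon, 2))
    for _ in range(n_iters):
        samples = mean + std * np.random.randn(n_samples, horizon, 2)
        costs = []
        for actions in samples:          # roll the prediction model forward
            img = image
            for a in actions:
                img = predict_next_image(img, a)
            costs.append(cost_to_goal(img))
        elite = samples[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6  # refit the sampler
    return mean[0]                       # execute the first action, then replan

first_action = plan_push(np.zeros((64, 64)))
```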

Convolutional Neural Network Features for Robot Manipulation

In Computer Vision, deep learning, Robotics on October 24, 2016 at 6:30 am

by Li Yang Ku (Gooly)


In my previous post, I mentioned the obstacles to applying deep learning techniques directly to robotics: first, training data is harder to acquire; second, interacting with the world is not just a classification problem. In this post, I am going to talk about a really simple approach that treats a convolutional neural network (CNN) as a feature extractor that generates a set of features similar to traditional features such as SIFT. This idea is applied to grasping on Robonaut 2 and published on arXiv (Associating Grasp Configurations with Hierarchical Features in Convolutional Neural Networks) with more details. The ROS package called ros-deep-vision that generates such features using an RGB-D sensor is also public.

Hierarchical CNN Features

 

When we look at deep models such as CNNs, we should keep in mind that these models work well because the way the layers stack up hierarchically matches how the data is structured. Our observed world is also hierarchical: there are common shared structures, such as edges, that can be combined in meaningful ways to represent more complex structures such as squares and cubes. A simple view of a CNN is just a tree structure, where a higher-level neuron is a combination of neurons in the previous layer. For example, a neuron that represents cuboids is a combination of neurons that represent the corners and edges of the cuboid. The figures above show examples of neurons that were found to activate consistently on cuboids and cylinders.

Deep Learning for Robotics

By taking advantage of this hierarchical nature of CNNs, we can turn a CNN into a feature extractor that generates features representing local structures of a higher-level structure. For example, such a hierarchical feature can represent the left edge of the top face of a box, while traditional edge detectors would find all edges in the scene. Instead of representing a feature with a single filter (neuron) in one of the CNN layers, this feature, which we call a hierarchical CNN feature, uses a tuple of filters from different layers. Using backpropagation restricted to one filter per layer allows us to locate such a feature precisely. By finding features such as the front and back edges of the top face of a box, we can learn where to place the robot fingers relative to these hierarchical CNN features in order to manipulate the object.

robonaut 2 grasping
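
A rough sketch of the restricted-backpropagation idea using a torchvision CNN (the layer and filter indices are arbitrary illustrations, and unlike the paper this sketch only restricts the top layer rather than one filter per layer): activate a single higher-layer filter, backpropagate it to the image, and the gradient peak localizes where that feature fires.

```python
import torch
import torchvision.models as models

cnn = models.alexnet().features.eval()   # use pretrained weights in practice
image = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in for an RGB image

# Pick one filter in a higher layer as the "parent" part of the hierarchical feature.
layer_idx, filter_idx = 10, 5            # arbitrary illustrative choices
activations = image
for i, layer in enumerate(cnn):
    activations = layer(activations)
    if i == layer_idx:
        break

# Backpropagate only this filter's response; the gradient magnitude in image
# space then localizes where this particular feature is in the scene.
activations[0, filter_idx].sum().backward()
saliency = image.grad[0].abs().sum(dim=0)             # (H, W) localization map
peak = torch.nonzero(saliency == saliency.max())[0]
print("feature localized near pixel (row, col):", peak.tolist())
```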

 

Convolutional Neural Networks in Robotics

In Computer Vision, deep learning, Machine Learning, Neural Science, Robotics on April 10, 2016 at 1:29 pm

by Li Yang Ku (Gooly)

robot using tools

As I mentioned in my previous post, Deep Learning and Convolutional Neural Networks (CNNs) have gained a lot of attention in the field of computer vision and outperformed other algorithms on many benchmarks. However, applying these techniques to robotics is non-trivial for two reasons. First, training large neural networks requires a lot of training data, and collecting it on robots is hard. Not only do research robots easily suffer network or hardware failures after many trials, but the time and resources needed to collect millions of data points are also significant. The trained neural network is also robot specific and cannot be used directly on a different type of robot, which limits the incentive to train such a network. Second, CNNs are good for classification, but when it comes to interacting with a dynamic environment, a class label alone does not tell the robot how to act. Knowing you are seeing a lightsaber gives no indication of how to interact with it. Of course you can hard code this information, but that would just be using Deep Learning for computer vision instead of robotics.

Despite these difficulties, a few groups did make it through and successfully applied Deep Learning and CNNs in robotics; I will talk about three of these interesting works.

  • Levine, Sergey, et al. “End-to-end training of deep visuomotor policies.” arXiv preprint arXiv:1504.00702 (2015). 
  • Finn, Chelsea, et al. “Deep Spatial Autoencoders for Visuomotor Learning.” arXiv preprint (2015). 
  • Pinto, Lerrel, and Abhinav Gupta. “Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700 Robot Hours.” arXiv preprint arXiv:1509.06825 (2015).

Deep Learning in Robotics

Traditional policy search approaches in reinforcement learning usually take the output of a “computer vision system” and send commands to low-level controllers such as a PD controller. In the paper “End-to-end training of deep visuomotor policies”, Levine et al. try to learn a policy from low-level observations (images and joint angles) and output joint torques directly. The overall architecture is shown in the figure above. As you can tell, this is ambitious and cannot be easily achieved without a few tricks. The authors first initialize the first layer with weights pre-trained on ImageNet, then train the vision layers with object pose information through pose regression. This pose information is obtained by having the robot hold the object with its hand covered by a cloth similar to the background (see figure below).

robot collecting pose information

In addition, using the pose information of the object, a trajectory can be learned with an approach called guided policy search. This trajectory is then used to train the motor control layers, which take the visual layer output plus the joint configuration as input and output joint torques. The results are better shown than described; see the video below.

The second paper, “Deep Spatial Autoencoders for Visuomotor Learning”, is from the same group at Berkeley. In this work, the authors try to learn a state space for reinforcement learning. Reinforcement learning requires a detailed representation of the state; in most work, however, such a state is manually designed. This work automates the state space construction from camera images, where a deep spatial autoencoder is used to acquire features that represent the positions of objects. The architecture is shown in the figure below.

Deep Autoencoder in Robotics

The deep spatial autoencoder maps full-resolution RGB images to a down-sampled, grayscale version of the input image. All information in the image is forced to pass through a bottleneck of spatial features, which forces the network to learn important low-dimensional representations. The positions are then extracted from the bottleneck layer and combined with joint information to form the state representation. The result is tested on several tasks shown in the figure below.

Experiments on Deep Auto Encoder
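
The “bottleneck of spatial features” can be sketched with a spatial soft-argmax (the dimensions below are illustrative, not the paper’s): each bottleneck feature map is reduced to a single expected (x, y) image position, and these positions are what get concatenated with the joint state.

```python
import torch
import torch.nn.functional as F

def spatial_soft_argmax(feature_maps):
    """feature_maps: (B, C, H, W) -> (B, C, 2) expected (x, y) position per channel."""
    b, c, h, w = feature_maps.shape
    probs = F.softmax(feature_maps.view(b, c, h * w), dim=-1).view(b, c, h, w)
    ys = torch.linspace(-1, 1, h).view(1, 1, h, 1)
    xs = torch.linspace(-1, 1, w).view(1, 1, 1, w)
    expected_x = (probs * xs).sum(dim=(2, 3))     # (B, C)
    expected_y = (probs * ys).sum(dim=(2, 3))
    return torch.stack([expected_x, expected_y], dim=-1)

# e.g. 16 bottleneck feature maps -> 16 (x, y) feature points -> a 32-D visual state
points = spatial_soft_argmax(torch.randn(1, 16, 109, 109)).view(1, -1)
```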

As I mentioned earlier, gathering a large amount of training data in robotics is hard, but in the paper “Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700 Robot Hours” the authors try to show that it is possible. Although still not comparable to datasets in the vision community such as ImageNet, gathering 50 thousand tries in robotics is significant, if not unprecedented. The data is gathered using the two-arm robot Baxter, which is (relatively) mass produced compared to most research robots.

Baxter Grasping

 

The authors then use the collected data to train a CNN initialized with weights trained on ImageNet. The final output is one of 18 different orientations of the gripper, assuming the robot always grasps from the top. The architecture is shown in the figure below.

Grasping with Deep Learning
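
A rough sketch of the output side of such a network (my own simplification, with arbitrary layer sizes): a convolutional backbone followed by a head that scores 18 discretized gripper angles for an image patch, where the best-scoring bin gives the top-down grasp orientation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class GraspAngleNet(nn.Module):
    """Image patch in, scores for 18 discretized gripper angles out."""
    def __init__(self, n_angles=18):
        super().__init__()
        backbone = models.alexnet()              # use ImageNet weights in practice
        self.features = backbone.features
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(256 * 6 * 6, 512), nn.ReLU(),
            nn.Linear(512, n_angles))
    def forward(self, patch):
        return self.head(self.features(patch))   # (B, 18) angle scores

scores = GraspAngleNet()(torch.randn(1, 3, 224, 224))
best_angle_deg = scores.argmax(dim=1).item() * 10   # grasp always from the top
```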

Light Weighted Vision Algorithms for a Light Weighted Aerial Vehicle

In Computer Vision, Robotics on January 15, 2013 at 10:50 pm

by Gooly (Li Yang Ku)

Recently, lightweight quadrotors equipped with a camera, such as the AR Drone, have become cheap and interesting enough to be used in research, even for poor students. By combining vision algorithms with a quadrotor you can really expand your imagination, even to robot fishing.

quadrotor fishing

SLAM (Simultaneous Localization and Mapping) has been a pretty hot topic in recent years; since in many situations a detailed map is not available to the robot, it would be much preferred if a robot could navigate an unknown environment and draw the map itself, like we humans do. Previously, most SLAM research used mobile robots with a laser scanner, which can build a decent map by combining laser scans with dead reckoning. However, a laser scanner weighs quite a lot and is not ideal for mounting on a lightweight aerial vehicle. This is where visual SLAM comes in. Visual SLAM is the name for all SLAM techniques that use mainly visual inputs (mostly a single monocular camera).

One of the early works in visual SLAM is MonoSLAM, which was done at Oxford. The video above shows how a 3D map is built by recognizing the relative positions of features using merely a 2D camera. PTAM further expands this concept and uses two threads to improve performance. The authors provide open-source code on their website, and it has been widely tested. The code was further ported to ROS by ETH Zurich and tested on a few UAVs in the European SFly project.

ROS also has a quadrotor simulator package where you can test your algorithms before crashing your real quadrotor.

All the publications and websites you should know about quadrotors are organized here.

RVIZ: a good reason to implement a vision system in ROS

In Computer Vision, Point Cloud Library, Robotics on November 18, 2012 at 2:33 pm

by Gooly (Li Yang Ku)

It might seem illogical to implement a vision system in ROS (Robot Operating System) if you are working on pure vision; however, after messing with ROS and PCL for a year, I can see the advantages of doing so. To clarify, we started to use ROS only because we needed it to communicate with Robonaut 2, but the RVIZ package in ROS is truly so helpful that I would recommend it even if no robots are involved.

(Keynote speech about Robonaut 2 and ROS from the brilliant guy I work for)

 

RVIZ is a ROS package that visualizes robots, point clouds, etc. Although PCL does provide a visualizer for point clouds, it only provides the most basic visualization functions; it is really not comparable with what RVIZ can give you.

  1. RVIZ is perfect for figuring out what went wrong in a vision system. The list on the left has a check box for each item. You can show or hide any visual information instantly.
  2. RVIZ provides 3D visualization which you can navigate with just your mouse. At first I preferred the kind of navigation found in Microsoft Robotics Studio or Counter-Strike, but once you get used to it, it is pretty handy. Since I already have two keyboards and two mice, it’s quite convenient to move around with my left mouse while keeping my right hand on my right mouse.
  3. The best part of RVIZ is the interactive marker. This is the part where you can be really creative. It makes selecting a certain area in 3D relatively easy. You can therefore adjust your vision system manually while it is still running, such as selecting a certain area as your workspace and ignoring other regions.
  4. You can have multiple vision processes showing vision data in the same RVIZ. You simply have to publish the point cloud or shape you want to show using the ROS publishing method, as in the sketch after this list. Visualizing is relatively painless once you get used to it.
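
For example, here is a minimal Python node that publishes a sphere marker RVIZ can display (the topic name, frame, and marker values are placeholder choices of mine; add a “Marker” display in RVIZ and point it at the topic):

```python
#!/usr/bin/env python
# Minimal ROS node that publishes a sphere marker for RVIZ to display.
import rospy
from visualization_msgs.msg import Marker

rospy.init_node("vision_debug_publisher")
pub = rospy.Publisher("my_vision_debug", Marker, queue_size=1)
rate = rospy.Rate(1)
while not rospy.is_shutdown():
    marker = Marker()
    marker.header.frame_id = "base_link"      # frame the marker is expressed in
    marker.header.stamp = rospy.Time.now()
    marker.type = Marker.SPHERE
    marker.action = Marker.ADD
    marker.pose.position.x = 0.5              # e.g. a detected object center
    marker.pose.orientation.w = 1.0
    marker.scale.x = marker.scale.y = marker.scale.z = 0.05
    marker.color.r = 1.0
    marker.color.a = 1.0                      # alpha must be non-zero to be visible
    pub.publish(marker)
    rate.sleep()
```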

Try not to view ROS as an operating system like Windows or Linux. It is more like the internet, where RVIZ is just one service, like Google Maps, and you can write your own app that queries the map as long as you use the communication protocol provided by ROS.