Life is a game, take it seriously

Paper Picks: IROS 2018

In AI, deep learning, Paper Talk, Robotics on December 30, 2018 at 4:18 pm

By Li Yang Ku (Gooly)

I was at IROS in Madrid this October presenting some fan manipulation work I did earlier (see video below); the King of Spain also attended the conference (see figure above.) When even the King is talking about deep learning, you know what the hyped trend in robotics is. Madrid is a fabulous city, so I was only able to pick a few papers below to share.

 

a) Roberto Lampariello, Hrishik Mishra, Nassir Oumer, Phillip Schmidt, Marco De Stefano, Alin Albu-Schaffer, “Tracking Control for the Grasping of a Tumbling Satellite with a Free-Floating Robot”

This is work done by folks at DLR (the German Aerospace Center). The goal is to grasp a tumbling satellite with a robot arm mounted on another free-floating satellite. As you can tell, this is a challenging task, and this work extends a series of previous efforts by different space agencies. Research on related grasping tasks can be roughly classified into feedback control methods that solve a regulation control problem and optimal control approaches that compute a feasible optimal trajectory in an open-loop fashion. In this work, the authors propose a system that combines both feedback and optimal control. This is achieved by using a motion planner, run off-line with all relevant constraints, to provide the visual servoing controller with a reference trajectory. The servoing will deviate from the original plan, but the gross motion is maintained so that motion constraints (such as singularity avoidance) are not violated. The approach is tested in a zero-gravity facility. If you haven't seen one of these zero-gravity devices, they are quite common among space agencies and are used to simulate the absence of gravity (see figure above.)
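
To make the planner-plus-servoing combination concrete, here is a minimal sketch (my own illustration, not the authors' controller): an offline-planned reference trajectory is tracked while a visual-servoing correction, computed from the estimated grasp-point pose, is clipped so the executed motion stays close to the constraint-checked plan. All names, gains, and dimensions are hypothetical.

```python
import numpy as np

def track_reference(reference_traj, estimate_grasp_pose, clip=0.05, kp=1.5, dt=0.01):
    """Follow an offline-planned end-effector reference while servoing on the
    visually tracked grasp point. `reference_traj` is an (N, 3) array of planned
    positions; `estimate_grasp_pose` returns the current visual estimate of the
    grasp point. The correction is clipped so the deviation from the offline
    plan (which was validated against motion constraints) stays small."""
    executed = []
    for x_ref in reference_traj:
        x_target = estimate_grasp_pose()            # visual feedback (noisy, drifting)
        correction = kp * (x_target - x_ref) * dt   # proportional servoing term
        correction = np.clip(correction, -clip, clip)  # preserve the gross planned motion
        executed.append(x_ref + correction)
    return np.array(executed)
```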

b) Josh Tobin, Lukas Biewald, Rocky Duan, Marcin Andrychowicz, Ankur Handa, Vikash Kumar, Bob McGrew, Alex Ray, Jonas Schneider, Peter Welinder, Wojciech Zaremba, Pieter Abbeel, “Domain Randomization and Generative Models for Robotic Grasping.”

This is work done (mostly) at OpenAI that tries to tackle grasping with deep learning. Previous deep learning approaches to grasping are usually trained on at most thousands of unique objects, which is relatively small compared to datasets for image classification such as ImageNet. In this work, a new data generation pipeline is proposed that cuts meshes and combines them randomly in simulation. With this approach the authors generate a million unrealistic training objects and show that they can be used to learn grasping of realistic objects with accuracy comparable to the state of the art. The proposed architecture is shown above: α is a convolutional neural network, β is an autoregressive model that generates n different grasps (n=20), and γ is another neural network, trained separately, that evaluates each grasp using the likelihood of success computed by the autoregressive model plus another observation from the in-hand camera. The autoregressive model is an interesting choice that the authors claim is advantageous because it can directly compute the likelihood of its own samples.
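
Below is a rough, hypothetical sketch of how the α/β/γ conditioning structure could be wired up. Layer sizes, the grasp parameterization, and the noisy stand-in for sampling are my simplifications; in the actual paper β outputs a distribution over discretized grasp dimensions that it can score directly.

```python
import torch
import torch.nn as nn

class GraspPipelineSketch(nn.Module):
    """Sketch of the alpha/beta/gamma structure described above; all sizes are placeholders."""
    def __init__(self, feat_dim=128, grasp_dim=4, n_samples=20):
        super().__init__()
        # alpha: CNN over the external camera image
        self.alpha = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        # beta: autoregressive proposer, each grasp dimension conditioned on the
        # image feature and the dimensions produced so far
        self.beta = nn.ModuleList(
            [nn.Linear(feat_dim + i, 1) for i in range(grasp_dim)])
        # gamma: separately trained evaluator that also sees the in-hand view
        self.gamma = nn.Sequential(
            nn.Linear(feat_dim + grasp_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.n_samples = n_samples

    def forward(self, image, in_hand_feat):
        f = self.alpha(image)                                  # (batch, feat_dim)
        grasps = []
        for _ in range(self.n_samples):
            dims = []
            for layer in self.beta:
                ctx = torch.cat([f] + dims, dim=-1)
                mean = torch.tanh(layer(ctx))
                dims.append(mean + 0.1 * torch.randn_like(mean))  # stand-in for sampling
            grasps.append(torch.cat(dims, dim=-1))
        grasps = torch.stack(grasps, dim=1)                    # (batch, n_samples, grasp_dim)
        scores = self.gamma(torch.cat(
            [in_hand_feat.unsqueeze(1).expand(-1, self.n_samples, -1), grasps], dim=-1))
        return grasps, scores                                  # scores: (batch, n_samples, 1)
```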

c) Barrett Ames, Allison Thackston, George Konidaris, “Learning Symbolic Representations for Planning with Parameterized Skills.”

This is a planning work (by folks I know) that combines parameterized motor skills with higher-level planning. At each state the robot needs to select both an action and how to parameterize it. This work introduces a discrete abstract representation for this kind of planning and demonstrates it on Angry Birds and a coffee-making task (see figure above.) The authors show that the approach generates a state representation that requires very few symbols (here symbols are used to describe preconditions and state estimates), therefore allowing an off-the-shelf probabilistic planner to plan faster. Only 16 symbols are needed for the Angry Birds task (not the real Angry Birds, a simpler version) and a plan can be found in 4.5 ms. One of the observations is that the only parameter settings that need to be represented by a symbol are the ones that maximize the probability of reaching the next state on the path to the goal.
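
As a small illustration of that observation (my own sketch, with hypothetical function names), the symbol construction can be thought of as keeping only the best parameter setting per skill and desired next abstract state, rather than one symbol per parameter setting:

```python
def build_symbols(skills, candidate_params, transition_prob):
    """`skills` maps each skill to its possible next abstract states,
    `candidate_params[skill]` lists candidate parameter settings, and
    `transition_prob(skill, theta, next_state)` is an assumed estimate of
    P(next_state | skill, theta). Only the maximizing setting gets a symbol."""
    symbols = {}
    for skill, next_states in skills.items():
        for s_next in next_states:
            best = max(candidate_params[skill],
                       key=lambda theta: transition_prob(skill, theta, s_next))
            symbols[(skill, s_next)] = best   # one symbol per (skill, next state)
    return symbols
```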


RSS 2018 Highlights

In Machine Learning, Paper Talk, Robotics on July 10, 2018 at 3:18 pm

by Li Yang Ku (Gooly)

I was at RSS (Robotics: Science and Systems) in Pittsburgh a few weeks ago. The conference was held in the Carnegie Music Hall, and the conference badge could also be used to visit the two Carnegie museums next to it. (The Eskimo and Native American exhibition on the third floor is a must see. Just in case you don't know, an igloo can be built within 1.5 hours by just two Inuit, and there is a video of it.)

RSS is a relatively small conference compared to IROS and ICRA. With only a single track, you get to see every accepted paper from many different fields, ranging from robotic whiskers to surgical robots. I would however argue that the highlights of this year's RSS were the keynote talks by Bernardine Dias and Chad Jenkins. Unlike most keynote talks I've been to, these two talks were less about new technologies and more about humanity and diversity. In this post, I am going to talk about both talks plus a few interesting papers from RSS.

a) Bernardine Dias, “Robotics technology for underserved communities: challenges, rewards, and lessons learned.”

Bernardine's group focuses on adapting technologies so that they can be accessible to communities that are left behind. One of the technologies developed was a tool for helping blind students learn braille, and it has had significant impact among blind communities across the globe. Bernardine gave an amazing talk at RSS. However, the video of her talk is not public yet (not sure if it will be), and surprisingly not many videos of her are on the internet. The closest content I can find is a really nice audio interview with Bernardine. There is also a short video describing their work below, but what this talk is really about is not the technology or design but the lessons learned through helping these underserved communities.

When roboticists talk about helping society, many of them focus on the technology and leave the actual application to the future. Bernardine's group is different in that they actually travel to these underserved communities to understand what they need and integrate their feedback directly into the design process. This is easier said than done. You have to understand each community before your visit; some acts are considered good in one culture but an insult in another. Giving without understanding often results in waste. Bernardine mentioned in her talk that one of the schools in an underserved community they collaborated with received a large one-time donation for buying computers. It was a large event where important people came, and it was broadcast on the news. However, to accommodate this hardware, the two-classroom school had to give up one of its classrooms and therefore reduce the number of classes it could teach. Ironically, the school did not have the resources to power these computers, nor people to teach the students or teachers how to use them. The donation actually resulted in more harm than help to the community.

b) Odest Chadwicke (Chad) Jenkins, “Robotics: Making the World a Better Place through Minimal Message-oriented Transport Layers.”

While Bernardine tries to adapt technologies for underserved communities, Chad tries to design interfaces for helping people with disabilities by deploying robots to their homes. Chad showed some of the work done by Charlie Kemp's group and his own lab with Henry Evans. Henry Evans was a successful financial officer in Silicon Valley until he had a stroke that left him paralyzed and mute. However, Henry did not give up on living fully and has strived to advocate for robots for people with disabilities. Henry's story is inspiring and an example of how robots can help people with disabilities live more freely. The Robots for Humanity project is the result of these successful collaborations. Since then, Henry has given three TED talks through robots, and the one below shows how Chad helped him fly a quadrotor.

 

However, the highlight of Chad's talk was when he called for more diversity in the community. Minorities, especially African Americans and Latinos, are severely underrepresented in the robotics community in the U.S. The issue of diversity is usually not something roboticists or computer scientists would think of or list as a priority. Based on Chad's numbers, past robotics conferences, including previous RSSs, were not immune to this kind of negligence. This is not hard to see: among the thousands of conference talks I've been to, there were probably no more than three talks by African American speakers. Although there are no obvious solutions to this problem yet, having the community become aware of, or agree, that this is a problem is an important first step. Chad urged people to be aware of whether everyone is given equal opportunities, and simply being friendly to isolated minorities at a conference may make a difference in the long run.

c) Rico Jonschkowski, Divyam Rastogi, and Oliver Brock. “Differentiable Particle Filters.”

This work introduces a differentiable particle filter (DPF) that can be trained end to end. The DPF is composed of an action sampler that generates action samples, an observation encoder, a particle proposer that learns to generate new particles based on observations, and an observation likelihood estimator that weights each particle. These four components are feedforward networks that can be learned from training data. What I found interesting is that the authors made comments similar to those of the authors of the paper “Deep Image Prior”: deep learning approaches work not just because of learning but also because of engineered structures, such as convolutional layers, that encode priors. This motivated the authors to look for architectures that can encode prior knowledge of algorithms into the neural network.
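
Here is a very rough sketch of how those four learned components could fit together in one filter step. Layer sizes, the noise model, and the particle-injection ratio are placeholders of mine, not the paper's choices.

```python
import torch
import torch.nn as nn

class DPFSketch(nn.Module):
    """Sketch of the four learned components of a differentiable particle filter."""
    def __init__(self, state_dim=3, act_dim=2, obs_dim=64, obs_feat=32):
        super().__init__()
        # learned stochastic motion model: (state, action, noise) -> state change
        self.action_sampler = nn.Linear(2 * state_dim + act_dim, state_dim)
        self.obs_encoder = nn.Sequential(nn.Linear(obs_dim, obs_feat), nn.ReLU())
        self.particle_proposer = nn.Linear(obs_feat, state_dim)
        self.obs_likelihood = nn.Sequential(
            nn.Linear(obs_feat + state_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Softplus())

    def step(self, particles, action, obs):
        n = particles.shape[0]
        noise = torch.randn_like(particles)
        # 1. move every particle through the learned (stochastic) motion model
        particles = particles + self.action_sampler(
            torch.cat([particles, action.expand(n, -1), noise], dim=-1))
        e = self.obs_encoder(obs)                      # 2. encode the observation
        # 3. replace a fraction of particles with ones proposed from the observation
        proposed = self.particle_proposer(e).expand(n // 4, -1)
        particles = torch.cat([particles[: n - n // 4], proposed], dim=0)
        # 4. reweight every particle by the learned observation likelihood
        w = self.obs_likelihood(torch.cat([e.expand(n, -1), particles], dim=-1)).squeeze(-1)
        return particles, w / w.sum()
```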

d) Marc Toussaint, Kelsey R. Allen, Kevin A. Smith, and Joshua B. Tenenbaum. “Differentiable Physics and Stable Modes for Tool-Use and Manipulation Planning.”

Task and Motion Planning (TAMP) approaches combine symbolic task planners and geometric motion planners hierarchically. Symbolic task planners can be helpful in solving task sequences based on high-level logic, while geometric planners operate on detailed specifications of the world state. This work is an extension that further considers dynamic physical interactions. The whole robot action sequence is modeled as a sequence of modes connected by switches. Modes represent durations that have constant contact or can be modeled by kinematic abstractions. The task can therefore be written in the form of a Logic-Geometric Program in which the whole sequence is jointly optimized. The video above shows that this approach can solve tasks that the authors call physical puzzles. This work also won the best paper award at RSS.
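
Schematically (my own simplified rendering, not the exact formulation from the paper), such a Logic-Geometric Program jointly optimizes the continuous path $x(t)$ together with the symbolic actions $a_{1:K}$ and modes $s_{1:K}$: path constraints hold within each mode, switch constraints hold at the mode boundaries $t_k$, and the mode sequence must be consistent with the symbolic logic.

$$
\begin{aligned}
\min_{x(t),\; a_{1:K},\; s_{1:K}} \quad & \int_0^T c\big(x(t), \dot{x}(t), \ddot{x}(t)\big)\, dt \\
\text{s.t.} \quad & x(0) = x_0, \\
& h_{\mathrm{path}}\big(x(t), s_{k(t)}\big) = 0, \;\; g_{\mathrm{path}}\big(x(t), s_{k(t)}\big) \le 0, \\
& h_{\mathrm{switch}}\big(x(t_k), a_k\big) = 0, \;\; g_{\mathrm{switch}}\big(x(t_k), a_k\big) \le 0, \\
& s_k \in \mathrm{succ}(s_{k-1}, a_k), \quad s_K \models \text{goal}.
\end{aligned}
$$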

Paper Picks: CVPR 2018

In Computer Vision, deep learning, Machine Learning, Neural Science, Paper Talk on July 2, 2018 at 9:08 pm

by Li Yang Ku (Gooly)

I was at CVPR in Salt Lake City. This year there were more than 6,500 attendees and a record-high number of accepted papers. People were definitely struggling to see them all. It was a little disappointing that there were no keynote speakers, but among the 9 major conferences I have been to, this one had the best dance party (see image below). You never know how many computer scientists can dance until you give them unlimited alcohol.

In this post I am going to talk about a few papers that were not the most popular ones but that I personally found interesting. If you want to know the papers that the reviewers thought were interesting instead, you can look into the best paper, “Taskonomy: Disentangling Task Transfer Learning,” and the four other honorable mentions, including “SPLATNet: Sparse Lattice Networks for Point Cloud Processing,” a collaboration between Nvidia and some people in the vision lab at UMass Amherst, which I am part of.

a) Donglai Wei, Joseph J Lim, Andrew Zisserman, and William T Freeman. “Learning and Using the Arrow of Time.”

I am quite fond of works that explore cues in the world that may be useful for unsupervised learning. Traditional deep learning approaches require large amounts of labeled training data, but we humans seem to be able to learn just by interacting with the world in an unsupervised fashion. In this paper, the direction of time is used as a cue. The authors train a neural network to distinguish the direction of time and show that such a network can be helpful in action recognition tasks.
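
The self-supervised signal itself is easy to sketch. The actual paper works on optical flow with a temporal CNN; this hypothetical snippet only shows how the labels come for free from the videos themselves.

```python
import torch

def arrow_of_time_batch(clips):
    """`clips` is a (batch, time, C, H, W) tensor of video snippets. Half of the
    clips are reversed in time; the network is then trained to predict the
    playback direction (1 = forward, 0 = reversed) with no human labels."""
    batch = clips.shape[0]
    labels = (torch.rand(batch) < 0.5).long()
    flipped = torch.flip(clips, dims=[1])              # reverse the time axis
    inputs = torch.where(labels.view(-1, 1, 1, 1, 1).bool(), clips, flipped)
    return inputs, labels
```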

b) Arda Senocak, Tae-Hyun Oh, Junsik Kim, Ming-Hsuan Yang, and In So Kweon. “Learning to Localize Sound Source in Visual Scenes.”

This is another example of using cues available in the world. In this work, the authors ask whether a machine can, like humans, learn the correspondence between a visual scene and sound, and localize the sound source only by observing sound and visual scene pairs. This is done using a triplet network that tries to minimize the distance between the visual feature of a video frame and the sound feature generated from a similar time window, while maximizing the distance between the same visual feature and a random sound feature. As you can see in the figure above, the network is able to associate different sounds with different visual regions.
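
The objective described above is essentially a triplet loss. A minimal sketch, assuming the three feature vectors have already been computed by the respective visual and sound networks, looks like this:

```python
import torch.nn.functional as F

def audiovisual_triplet_loss(visual_feat, sound_feat, random_sound_feat, margin=1.0):
    """The video frame's visual feature (anchor) should be closer to the sound
    feature from the same time window (positive) than to a sound feature drawn
    from elsewhere (negative), by at least `margin`."""
    pos = F.pairwise_distance(visual_feat, sound_feat)
    neg = F.pairwise_distance(visual_feat, random_sound_feat)
    return F.relu(pos - neg + margin).mean()
```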

c) Edward Kim, Darryl Hannan, and Garrett Kenyon. “Deep Sparse Coding for Invariant Multimodal Halle Berry Neurons.”

This work is inspired by experiments done by Quiroga et al. that found a single neuron in one human subject's brain that fires on both pictures of Halle Berry and text of Halle Berry's name. In this paper, the authors show that training a deep sparse coding network that takes a face image and a text image of the corresponding name results in learning a multimodal invariant neuron that fires on both Halle Berry's face and name. When one modality is missing, the missing image or text can be generated. In this network, each sparse coding layer is learned through the Locally Competitive Algorithm (LCA), which uses principles of thresholding and local competition between neurons. Top-down feedback is also used in this work by propagating reconstruction error downwards. The authors show interesting results where adding information to one modality changes the belief about the other modality. The figure above shows that this Halle Berry neuron in the sparse coding network can distinguish between Catwoman played by Halle Berry and Catwoman played by Anne Hathaway or Michelle Pfeiffer.
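
For reference, here is a minimal sketch of an LCA iteration for a single layer (the soft-thresholding variant; the dictionary Φ, the threshold, and the number of iterations are placeholders, not values from the paper):

```python
import numpy as np

def lca_sparse_code(Phi, x, lam=0.1, tau=10.0, steps=200):
    """Locally Competitive Algorithm sketch: membrane potentials u are driven by
    the input, inhibited by active competing neurons, and thresholded to give
    the sparse code a. Phi is (input_dim, n_neurons), x is (input_dim,)."""
    b = Phi.T @ x                                   # driving input for each neuron
    G = Phi.T @ Phi - np.eye(Phi.shape[1])          # lateral competition between neurons
    u = np.zeros(Phi.shape[1])
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold
        u += (b - u - G @ a) / tau                  # leaky integration with inhibition
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```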

d) Assaf Shocher, Nadav Cohen, and Michal Irani. “Zero-Shot Super-Resolution using Deep Internal Learning.”

Super resolution is the task of increasing the resolution of an image. The typical approach nowadays is to learn it with a neural network. However, the authors show that this approach only works well if the downsampling process from the high-resolution to the low-resolution image is similar in training and testing. In this work, no training is needed beforehand. Given a test image, training examples are generated by downsampling patches of this same image. The fundamental idea behind this approach is the fact that natural images have strong internal data repetition. Therefore, from the same image you can infer the high-resolution structure of lower-resolution patches by observing other parts of the image that have higher resolution and similar structure. The image above shows their results (top row) versus state-of-the-art results (bottom row).
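
A bare-bones sketch of this idea (not the authors' architecture, which also uses random crops and augmentations) could look like the following: the only training data is the test image itself, downsampled once more to form (low-res, high-res) pairs, so the learned mapping matches that image's own internal statistics.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def zero_shot_sr(test_img, scale=2, iters=500, lr=1e-3):
    """`test_img` is a (1, 3, H, W) tensor. A small residual CNN is trained only
    on pairs generated from the test image, then applied to upscale it."""
    net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 3, 3, padding=1))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        # training pair: a further-downsampled version plays the low-res input,
        # the test image itself is the high-res target
        lr_img = F.interpolate(test_img, scale_factor=1 / scale, mode='bicubic')
        inp = F.interpolate(lr_img, size=test_img.shape[-2:], mode='bicubic')
        loss = F.mse_loss(net(inp) + inp, test_img)   # learn the residual detail
        opt.zero_grad()
        loss.backward()
        opt.step()
    # apply the learned upscaler to the original test image
    up = F.interpolate(test_img, scale_factor=scale, mode='bicubic')
    return net(up) + up
```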

e) Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. “Deep Image Prior.”

Most modern approaches for denoising, super resolution, or inpainting tasks use an image generation network trained on a large dataset consisting of pairs of images before and after the degradation. This work shows that these nice outcomes are not just the result of learning but also the effect of the convolutional structure. The authors take an image generation network, feed random noise as input, and then update the network using the error between the outcome and the test image, such as the left image shown above for inpainting. After many iterations, the network magically generates an image that fills the gap, such as the right image above. What this work says is that, contrary to the common belief that deep learning approaches for image restoration learn image priors better than engineered priors, the deep structure itself is just a better engineered prior.
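
A minimal sketch of this procedure for inpainting (with a toy network far smaller than the encoder-decoder used in the paper, and with hypothetical hyperparameters) might look like this: the input noise is fixed, only the weights are optimized, and the loss only looks at the known pixels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def deep_image_prior_inpaint(corrupted, mask, iters=3000, lr=1e-2):
    """`corrupted` is a (1, 3, H, W) image with a hole, `mask` is (1, 1, H, W)
    with 1 at known pixels and 0 inside the hole. No external training data."""
    net = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 3, 3, padding=1))
    z = torch.randn(1, 32, corrupted.shape[-2], corrupted.shape[-1])  # fixed noise input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        out = net(z)
        # fit only the known pixels; the convolutional structure itself
        # fills in the hole with plausible content
        loss = F.mse_loss(out * mask, corrupted * mask)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net(z).detach()
```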