Life is a game, take it seriously

Talk the Talk: Optimization’s Untold Gift to Learning

In AI, Computer Vision, deep learning, Machine Learning on October 13, 2019 at 10:40 am

by Li Yang Ku (Gooly)

[Image: deep learning optimization]

In this post I am going to talk about a fascinating talk given by Nati Srebro at ICML this June. Srebro has given similar talks at many places, but I think he really nailed it this time. This talk is interesting not only because he provided a different view of the role of optimization in deep learning, but also because he clearly explained why many researchers' arguments about why deep learning works don't make sense.

Srebro first looks into what we know about deep learning (the typical feed-forward network) through three questions. The first question concerns the capacity of the network: how many samples do we need to learn a given network architecture? The short answer is that it should be proportional to the number of parameters in the network, which is the total number of edges. The second question is about the expressiveness of the network: what can we express with a given model class? What type of problems can we learn? Since a two-layer neural network is a universal approximator, it can approximate any continuous function. This is, however, not very useful information, since it may require an exponentially large network and an exponential number of samples to learn. So the more interesting question is what we can express with a reasonably sized network, and much recent research more or less focuses on this question. However, Srebro argues that since there is another theory saying that any function that can be computed within a reasonable amount of time can be captured by a network of reasonable size (please comment below if you know what theory this is), all problems that we expect to be solvable can be expressed by a reasonably sized network.
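Going back to the first question, the capacity measure is just the edge count of the network. Here is a minimal sketch of my own (not from the talk) that counts the edges, i.e. weights, of a fully connected feed-forward network; the rough rule of thumb above says the number of samples needed should scale with this count. The layer sizes are placeholder assumptions.

```python
# Count the edges (weights) of a fully connected feed-forward network.
# layer_sizes is hypothetical: e.g. 784 inputs, two hidden layers, 10 outputs.
layer_sizes = [784, 256, 128, 10]

edges = sum(n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
biases = sum(layer_sizes[1:])

print(f"edges (weights): {edges}, biases: {biases}, total parameters: {edges + biases}")
# edges (weights): 234752, biases: 394, total parameters: 235146
```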

The third question is about computation: how hard is it to find the optimal parameters? The bad news is that finding the weights of even tiny networks is NP-hard. Theories (link1, link2) show that even if the training data can be perfectly expressed by a small neural network, there is no polynomial-time algorithm to find such a set of weights. This means that the expressiveness described in question 2 doesn't do us much good, since we aren't capable of finding the optimal solution. But we all know that in reality neural networks work pretty well, so it seems that there is some magical property that allows us to learn them. Srebro emphasizes that we still don't know what this magical property is, but we do know it is not because we can represent the data well with the network. If you ask vision folks why neural networks work, they might say something like the lower layers of the network match low-level visual features and the higher layers match higher-level visual features. However, this answer is about the expressiveness of the network described in question 2, which is not sufficient for explaining why neural networks work and provides zero evidence, since we already know neural networks have the power to express any problem.

Srebro then talked about the observed behavior that neural networks usually don't overfit to the training data. This is an unexpected property, quite similar to the behavior of Adaboost, which was invented in 1997 and quite popular in the 2000s. It was only after its invention that people discovered that the reason Adaboost doesn't overfit is that it is implicitly minimizing the L-1 norm, which limits the complexity of the solution. So the question Srebro pointed out was whether the gradient descent algorithms used to learn neural networks are also implicitly minimizing some complexity measure that helps reach a solution that generalizes. Given a set of training data, a neural network can have multiple optimal solutions that are global minima (zero training error). However, some of these global minima perform better than others on the test data. Srebro argues that the optimization algorithm might be doing something else besides just minimizing the training error. Therefore, by changing the optimization algorithm we might observe a difference in how well a neural network generalizes to test data, and this is exactly what Srebro's group discovered. In one experiment they showed that even though Adam achieves lower training error than stochastic gradient descent, it actually performs worse on the test data. What this means is that we might not be putting enough emphasis on optimization in the deep learning community, where a typical paper looks like the following:

[Image: deep learning paper template]

The contributions are on the model and the loss function, while the optimization gets only a brief mention. So the main point Srebro is trying to convey is that different optimization algorithms lead to different inductive biases, and different inductive biases lead to different generalization properties. "We need to understand optimization algorithm not just as reaching some global optimum, but as reaching a specific optimum."
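To make the Adam vs. SGD observation above concrete, here is a minimal sketch of the kind of experiment described: train the same model twice, changing only the optimizer, and compare training and test error. This is my own illustration, not code from Srebro's group; the model, data, and hyperparameters are placeholder assumptions, and on this toy data the gap will not be meaningful. The point is only the shape of the experiment.

```python
import torch
import torch.nn as nn

def train_and_eval(optimizer_name, train_x, train_y, test_x, test_y, epochs=50):
    """Train the same small network with a chosen optimizer and report errors."""
    torch.manual_seed(0)                       # same initialization for a fair comparison
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = (torch.optim.Adam(model.parameters(), lr=1e-3) if optimizer_name == "adam"
           else torch.optim.SGD(model.parameters(), lr=0.1))
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(train_x), train_y).backward()
        opt.step()
    with torch.no_grad():
        train_err = (model(train_x).argmax(1) != train_y).float().mean().item()
        test_err = (model(test_x).argmax(1) != test_y).float().mean().item()
    return train_err, test_err

# Toy data standing in for a real dataset.
x = torch.randn(600, 20)
y = (x[:, 0] + 0.5 * torch.randn(600) > 0).long()
print("SGD :", train_and_eval("sgd", x[:400], y[:400], x[400:], y[400:]))
print("Adam:", train_and_eval("adam", x[:400], y[:400], x[400:], y[400:]))
```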

Srebro further talked about a few more works based on these observations. If you are interested by now, you should probably watch the whole video (you will need to fast forward a bit to get to the start.) I am, however, going to put in a little bit of my own thoughts here. Srebro emphasizes the importance of optimization a lot in this talk and says the deep models we use now can basically express any problem we have; therefore the model is not what makes deep learning work. However, we also know that the model does matter, based on the claims of the many papers that introduce new model architectures. So how could both of these claims be true? We have to remember that the model architecture is also part of the optimization process: it shapes the geometry of the landscape the optimization algorithm is optimizing over. Hence, if the neural network model provides a landscape that allows the optimization algorithm to reach a desirable minimum more easily, it will also generalize better to the test data. In other words, the model and the optimization algorithm have to work together.

The Deep Learning Not That Smart List

In AI, Computer Vision, deep learning, Machine Learning, Paper Talk on May 27, 2019 at 12:00 pm

by Li Yang Ku (Gooly)

Deep learning is one of the most successful scientific stories in modern history, attracting billions of dollars of investment in half a decade. However, there is always the other side of the story, where people discover the less magical parts of deep learning. This post is about a few pieces of research (quite a few published this year) showing that deep learning might not be as smart as you think (most of the time the authors also come up with a way to fix the problem, since it used to be forbidden to accept a paper without a deep learning improvement.) This is just a short list; please comment below on other papers that also belong.

a) Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. “Intriguing properties of neural networks.”, ICLR 2014

The first non-magical discovery about deep learning has to go to the finding of adversarial examples. It was discovered that images with certain unnoticeable perturbations added to them can result in mysterious false detections by a deep network. Although technically the first publication of this discovery should go to the paper "Evasion Attacks against Machine Learning at Test Time" by Battista Biggio et al., published in September 2013 at ECML PKDD, the paper that really caught people's attention is this one, which was put on arXiv in December 2013 and published at ICLR 2014. In addition to having bigger names on the author list, this paper also shows adversarial examples on more colorful images that clearly demonstrate the problem (see image below.) Since this discovery, there have been continuous battles between the camp that tries to strengthen defenses against attacks and the camp that tries to break them (such as "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" by Athalye et al.), which led to a recent paper at ICLR 2019, "Are adversarial examples inevitable?" by Shafahi et al., that questions from a theoretical standpoint whether a deep network can ever be free of adversarial examples.
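For a sense of how easy such perturbations are to construct, here is a minimal sketch of the fast gradient sign method from the later Goodfellow et al. work on adversarial examples (the paper above used a box-constrained L-BFGS procedure instead): take one gradient step on the input in the direction that increases the loss. The model, images, and labels are assumed to come from elsewhere.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (pixel values assumed in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge each pixel by +/- epsilon in the direction that increases the loss.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
    return adversarial.detach()

# Usage (hypothetical classifier and data):
#   adv = fgsm_example(pretrained_net, batch_of_images, labels)
```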

b) Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. “Deep Image Prior.” CVPR 2018

This is not a paper intended to discover flaws in deep learning; in fact, the result of this paper is one of the most magical deep learning results I've seen. The authors showed that deep networks are able to fill in cropped-out parts of images in a very reasonable way (see image below, left input, right output.) However, it also unveils some less magical parts of deep learning. Deep learning's success was mostly advertised as coming from learning, claimed to work better than traditionally engineered visual features because it learns from large amounts of data. This work, however, uses no training data nor pre-trained weights. It shows that convolution and the specific layered network architecture (which may be the outcome of millions of grad student hours of trial and error) play a significant role in the success. In other words, we are still engineering visual features, just in a more subtle way. It also raises the question of what made deep learning so successful. Is it because of learning? Or because thousands of grad students tried all kinds of architectures, loss functions, and training procedures, and some combinations turned out to be great?
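The core recipe is simple enough to sketch: fit a randomly initialized ConvNet to a single corrupted image, with the loss computed only on the known pixels, and let the architecture itself act as the prior. The sketch below is my own toy version under stated assumptions (a tiny ConvNet instead of the paper's encoder-decoder, random tensors in place of a real image and mask); it only shows the structure of the optimization.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a small ConvNet instead of the paper's encoder-decoder, and a
# random 3x64x64 "image" with a square hole instead of a real inpainting target.
H, W = 64, 64
net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
z = torch.randn(1, 8, H, W)            # fixed random input code, no data involved
target = torch.rand(1, 3, H, W)        # the corrupted image we want to restore
mask = torch.ones_like(target)
mask[:, :, 16:48, 16:48] = 0           # 0 marks the cropped-out (unknown) region

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(2000):
    out = net(z)
    loss = ((out - target) ** 2 * mask).mean()   # fit only the known pixels
    opt.zero_grad()
    loss.backward()
    opt.step()

restored = net(z)   # the output fills in the hole using only the architecture's bias
```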

c) Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. “ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.” ICLR 2019.

It was widely accepted in the deep learning community that CNNs recognize objects by combining lower-level filters that represent features such as edges into more complex shapes layer by layer. In this recent work, the authors noticed that, contrary to what the community believes, existing deep learning models seem to have a strong bias towards textures. For example, a cat with elephant texture is often recognized as an elephant. Instead of learning what a cat looks like, CNNs seem to take a shortcut and just try to recognize cat fur. You can find a detailed blog post about this work here.

d) Wieland Brendel, and Matthias Bethge. “Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet.” ICLR 2019.

This is a paper from the same group as the previous paper. Based on the same observations, this paper claims that CNNs are not that different from bag-of-features approaches that classify based on local features. The authors created a network that only looks at local patches in an image, without high-level spatial information, and were able to achieve pretty good results on ImageNet. The authors further shuffled features within an image, and existing deep learning models seemed insensitive to these changes. Again, CNNs seem to be taking shortcuts by classifying based on just local features. More on this work can be found in this post.
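The bag-of-local-features idea is straightforward to sketch: score each small patch independently, then average the per-patch class scores over the whole image, discarding the global spatial arrangement. The sketch below is a toy illustration of that structure (a tiny patch classifier on random data), not the BagNet architecture from the paper.

```python
import torch
import torch.nn as nn

class TinyBagOfPatches(nn.Module):
    """Classify each local patch independently, then average the class logits."""
    def __init__(self, patch=16, num_classes=10):
        super().__init__()
        self.patch = patch
        # Per-patch classifier: its receptive field never exceeds one patch.
        self.patch_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        p = self.patch
        # Cut the image into non-overlapping p x p patches.
        patches = x.unfold(2, p, p).unfold(3, p, p)          # (B, C, H/p, W/p, p, p)
        b, c, nh, nw, _, _ = patches.shape
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, p, p)
        logits = self.patch_net(patches)                     # (B * nh * nw, classes)
        # Average the per-patch evidence; spatial arrangement is thrown away.
        return logits.reshape(b, nh * nw, -1).mean(dim=1)

model = TinyBagOfPatches()
print(model(torch.rand(2, 3, 64, 64)).shape)    # torch.Size([2, 10])
```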

e) Azulay, Aharon, and Yair Weiss. “Why do deep convolutional networks generalize so poorly to small image transformations?.” rejected by ICLR 2019.

This is a paper that discovered that modern deep networks may fail to recognize images shifted by just one pixel, but it got rejected because the reviewers didn't quite buy the experiments or the explanation. (The authors made the big mistake of not providing an improved deep network in the paper.) The paper showed that when an image is shifted slightly, or when a sequence of frames from a video is given to a modern deep network, jaggedness appears in the detection result (see example below, where the posterior probability of recognizing the polar bear varies a lot frame by frame.) The authors further created a dataset from ImageNet with the same images embedded at a random location in a larger image frame, and showed that performance dropped by about 30% when the embedding frame is twice the width of the original image. This work shows that despite modern networks getting close to human performance on ImageNet classification, they might not generalize to the real world as well as we hoped.
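The basic test is easy to reproduce on any pretrained classifier: shift an image by one pixel and compare the predicted class probabilities before and after. A minimal sketch of my own using a standard torchvision model follows; the image tensor is assumed to be already preprocessed (random noise is used here just to keep it runnable), and the shift is circular for simplicity.

```python
import torch
import torchvision

# Any pretrained classifier will do; the input is assumed to be a preprocessed
# 1x3x224x224 image (random noise here as a stand-in).
model = torchvision.models.resnet50(pretrained=True).eval()
image = torch.rand(1, 3, 224, 224)
shifted = torch.roll(image, shifts=1, dims=3)      # shift the image one pixel to the right

with torch.no_grad():
    p_orig = torch.softmax(model(image), dim=1)
    p_shift = torch.softmax(model(shifted), dim=1)

top = p_orig.argmax(dim=1).item()
print("top-1 prob before shift:", p_orig[0, top].item())
print("top-1 prob after shift :", p_shift[0, top].item())   # a large drop indicates shift sensitivity
```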

f) Nalisnick, Eric, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. “Do Deep Generative Models Know What They Don’t Know?.” ICLR 2019

This work from DeepMind looks into the problem that, when tested on data with a distribution different from the training distribution, a deep neural network can give wrong results with high confidence. For example, in the paper "Multiplicative Normalizing Flows for Variational Bayesian Neural Networks" by Louizos and Welling, it was discovered that on the MNIST dataset a trained network can be highly confident but wrong when the input digit is tilted. This makes deploying deep learning for critical tasks quite problematic. Deep generative models were thought to be a solution to such problems: since they also model the distribution of the samples, they could reject anomalies that do not belong to the same distribution as the training samples. However, the authors' short answer to the question in the title is no; even for very distinct datasets, such as digits versus images of horses and trucks, anomalies cannot be identified, and in many cases they are even wrongly assigned higher likelihood than samples that do come from the training distribution. The authors therefore "urge caution when using these models with out-of-training-distribution inputs or in unprotected user-facing systems."
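The failure mode is easy to describe in code: take a trained density model, score in-distribution and out-of-distribution samples, and compare the two averages; the paper's finding is that the out-of-distribution scores often come out higher. The sketch below is only an illustration of that evaluation, not the paper's code, and it assumes a hypothetical `density_model` object with a `log_prob` method plus two pre-loaded batches of images.

```python
import torch

def ood_likelihood_check(density_model, in_dist_batch, ood_batch):
    """Compare mean log-likelihood on in-distribution vs. out-of-distribution data.

    `density_model` is a hypothetical trained generative model exposing
    `log_prob(x)`; the two batches are image tensors from, e.g., the training
    dataset and a completely different dataset.
    """
    with torch.no_grad():
        in_ll = density_model.log_prob(in_dist_batch).mean().item()
        ood_ll = density_model.log_prob(ood_batch).mean().item()
    print(f"mean log-likelihood, in-distribution:     {in_ll:.1f}")
    print(f"mean log-likelihood, out-of-distribution: {ood_ll:.1f}")
    # If the second number is higher, thresholding on likelihood cannot reject
    # the out-of-distribution inputs -- the phenomenon the paper reports.
```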

Revisiting Behavior-Based Robotics

In AI, Robotics on February 28, 2019 at 9:10 pm

by Li Yang Ku (Gooly)

The maker of the well-known Baxter robot, Rethink Robotics, closed its doors last October. The Baxter robot, although not perfect, plays an important role in robot history. Its low price tag ($22,000 instead of $100,000) and human-safe features (it won't be able to kill grad students) made these robots some of the most common robots in the robotics research community. Unfortunately, that was not enough to survive in the market.

Many of you may have heard of Rodney Brooks, the founder and CTO of Rethink Robotics, who was also the director of MIT's Computer Science & Artificial Intelligence Laboratory (CSAIL) and one of the founders of iRobot, but to me, it is behavior-based robotics that best describes him. In this post, I am going to revisit Rodney Brooks' research on behavior-based robotics and explain why it was a big deal back then.

To fully understand behavior-based robotics, we have to go back in time and look at what was happening in the research world before Rodney Brooks started advocating behavior-based robotics in the 80s. This was right around the time of the early AI winter and before the collapse of the expert system industry. An expert system stores a huge knowledge base of logical rules describing facts about the world, entered by experts. At query time, an inference engine tries to find a solution based on the given rules. It is not hard to imagine that the robots designed at that time would also be based on this kind of thinking. Shakey, the famous robot built by the Stanford Research Institute in the late 60s, used logic to solve tasks based on a symbolic model of the environment. Despite its national fame, Shakey was designed for an experimental environment consisting of big blocks and, as you might know, it was not a technology breakthrough that led to household robots.

In the late 70s, Rodney Brooks was especially frustrated with these symbolic approaches that try to model the world in detail. Computers were not fast at that time, and trying to estimate a world model with uncertainty is even more time consuming. On a trip during which Rodney was stuck in Thailand, he observed that insects seem to be much more capable than his robots despite having tiny nervous systems. The realization was that there is no need to model the world, because the world is always there; the robot can always sense the world and use it as its own model. This simple idea is basically the core concept of behavior-based robotics.

Rodney went on to propose the subsumption robot architecture, which is composed of different layers of state machines, in which the higher layers subsume the lower layers to create more complicated behaviors. Brooks claims that this approach is radically different from the traditional approach that follows the sense-model-plan-act framework. The subsumption architecture is capable of reacting to the world in real time, since the lower layers can produce outputs directly. Instead of executing actions in a pre-planned sequence, the next action can simply be activated by new observations of the world. Rodney argues that this new approach has a very different decomposition compared to the traditional sequential information flow: in the subsumption architecture, each layer itself connects sensing to action. Higher layers may rely on lower layers, but they do not call lower layers as subroutines. Several robots were built based on this architecture, including the robot Allen that can move to a goal while avoiding obstacles, the robot Herbert that can pick up soda cans, the insect-like robot Genghis, etc.
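To give a flavor of this layered decomposition, here is my own toy sketch (not Brooks' original implementation, which was built from augmented finite state machines) of a two-layer subsumption-style control loop, where a higher "avoid" layer can suppress the output of a lower "wander" layer:

```python
import random

def wander():
    """Lowest layer: always produces some forward motion with a bit of turning."""
    return {"forward": 1.0, "turn": random.uniform(-0.2, 0.2)}

def avoid(sensors, lower_command):
    """Higher layer: subsumes the wander layer when an obstacle is sensed."""
    if sensors["obstacle_distance"] < 0.5:
        # Suppress the lower layer's output and substitute an avoidance turn.
        return {"forward": 0.0, "turn": 1.0}
    return lower_command          # otherwise let the lower layer's behavior through

def control_step(sensors):
    # Each layer connects sensing to action; no world model, no planner.
    return avoid(sensors, wander())

# One tick of the control loop with made-up sensor readings.
print(control_step({"obstacle_distance": 0.3}))   # -> avoidance turn
print(control_step({"obstacle_distance": 2.0}))   # -> wandering
```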

These works were quite influential and provided a very different perspective on how to approach AI. Unlike other robots at that time, robots built on the subsumption architecture could react in real time in a human environment. Rodney went on to promote this concept and published a series of papers (with some of the best titles), such as "Planning is just a way of avoiding figuring out what to do next" and "Elephants don't play chess." Two crucial ideas were emphasized in these papers: 1) Situatedness: the robots should not deal with abstract descriptions, but with the environment that directly influences them, and 2) Embodiment: the robots should experience the world directly, so that their actions have immediate feedback on their own sensations. These are the central ideas that led to behavior-based solutions.

Today, computers are much faster, and robots are now capable of running the good old-fashioned sense-model-plan-act sequence close to, if not yet in, real time. Model-heavy approaches such as physics-based approaches have been among the most popular topics, and planning algorithms are ubiquitous in robot arms and self-driving cars. So is behavior-based robotics still relevant in 2019? Some of the concepts still exist in many robots, but in a more hybrid fashion, such as having a lower-level loop that allows the robot to react quickly underneath a high-level AI planning layer. Although behavior-based robotics is not mentioned as often nowadays, I am pretty sure we will revisit it when the sense-model-plan-act approach fails again.

References:

  • Brooks, Rodney A. “New approaches to robotics.” Science 253, no. 5025 (1991): 1227-1232.
  • Brooks, Rodney A. “Elephants don’t play chess.” Robotics and autonomous systems 6, no. 1-2 (1990): 3-15.
  • Brooks, Rodney A. “Planning is just a way of avoiding figuring out what to do next.” (1987).
  • Talking Robots Podcast with Rodney Brooks
  • Wikipedia: subsumption architecture