
Talk the Talk: Optimization’s Untold Gift to Learning

In AI, Computer Vision, deep learning, Machine Learning on October 13, 2019 at 10:40 am

by Li Yang Ku (Gooly)

[Image: deep learning optimization]

In this post I am going to talk about a fascinating talk given by Nati Srebro at ICML this June. Srebro has given similar talks at many places, but I think he really nailed it this time. The talk is interesting not only because he provides a different view of the role of optimization in deep learning, but also because he clearly explains why many researchers' arguments about why deep learning works don't make sense.

Srebro first looks into what we know about deep learning (the typical feed-forward network) through three questions. The first question is about the capacity of the network: how many samples do we need to learn a certain network architecture? The short answer is that it should be proportional to the number of parameters in the network, which is the total number of edges. The second question is about the expressiveness of the network: what can we express with a certain model class? What kinds of problems can we learn? Since a two-layer neural network is a universal approximator, it can approximate any continuous function. This is, however, not very useful information, since doing so may require an exponentially large network and an exponential number of samples. So the more interesting question is what we can express with a reasonably sized network, and much recent research more or less focuses on this question. However, Srebro argues that since there is another theory that says any function that can be computed in a reasonable amount of time can be captured by a network of reasonable size (please comment below if you know what theory this is), all problems that we expect to be solvable can be expressed by a reasonably sized network.
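To make the "capacity is proportional to the number of edges" point concrete, here is a minimal sketch (my own illustration, not from the talk) that counts the edges and biases of a fully connected feed-forward network; the `count_parameters` name and the layer sizes are placeholders I made up.

```python
def count_parameters(layer_sizes, include_bias=True):
    """Count weights (edges) and optional biases of a fully connected MLP.

    layer_sizes is e.g. [784, 256, 128, 10] for a three-layer network.
    """
    total = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        total += n_in * n_out      # one weight per edge between consecutive layers
        if include_bias:
            total += n_out         # one bias per output unit
    return total


if __name__ == "__main__":
    sizes = [784, 256, 128, 10]    # hypothetical MNIST-sized MLP, just for illustration
    print(count_parameters(sizes))
```

For this hypothetical 784-256-128-10 network the count comes out to roughly 235k parameters, which by the capacity argument above is the order of the number of samples one would expect to need.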

The third question is about computation: how hard is it to find the optimal parameters? The bad news is that finding the weights for even tiny networks is NP-hard. Theories (link1 link2) show that even if the training data can be perfectly expressed by a small neural network, there is no polynomial-time algorithm to find such a set of weights. This means that the expressiveness described in question 2 doesn't really do much good on its own, since we aren't capable of finding the optimal solution. But we all know that in reality neural networks work pretty well, so it seems there is some magical property that allows us to learn them. Srebro emphasizes that we still don't know what this magical property is, but we do know it is not because we can represent the data well with the network. If you ask vision folks why neural networks work, they might say something like: the lower layers of the network match low-level visual features and the higher layers match higher-level visual features. However, this answer is about the expressiveness of the network described in question 2, which is not sufficient for explaining why neural networks work and provides zero evidence, since we already know neural networks have the power to express any problem.

Srebro then talked about the observed behavior that neural networks usually don't overfit the training data. This is an unexpected property, quite similar to the behavior of AdaBoost, which was invented in 1997 and quite popular in the 2000s. Only after its invention did people discover that the reason AdaBoost doesn't overfit is that it implicitly minimizes the L1 norm, which limits the complexity of the solution. So the question Srebro pointed out is whether the gradient descent algorithm used to train neural networks is also implicitly minimizing some complexity measure that would be beneficial in reaching a solution that generalizes. Given a set of training data, a neural network can have multiple optimal solutions that are global minima (zero training error). However, some of these global minima perform better than others on the test data. Srebro argues that the optimization algorithm might be doing something else besides just minimizing the training error. Therefore, by changing the optimization algorithm we might observe a difference in how well a neural network generalizes to test data, and this is exactly what Srebro's group discovered. In one experiment they showed that even though Adam achieves lower training error than stochastic gradient descent, it actually performs worse on the test data (a toy sketch of how such a comparison can be set up appears after the figure below). What this means is that we might not be putting enough emphasis on optimization in the deep learning community, where a typical paper looks like the following:

[Image: Deep Learning Paper Template]

The contributions are on the model and the loss function, while the optimization is just a brief mention. So the main point Srebro is trying to convey is that different optimization algorithms lead to different inductive biases, and different inductive biases lead to different generalization properties. "We need to understand optimization algorithm not just as reaching some global optimum, but as reaching a specific optimum."
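Here is the toy sketch of the Adam-versus-SGD comparison mentioned above (my own construction, not Srebro's actual experiment): train the same small over-parameterized network from the same initialization with SGD and with Adam, then compare training and held-out error. The data, architecture, and learning rates are arbitrary placeholders, and whether Adam actually generalizes worse in this toy setup depends on the details; the point is only to show the shape of the comparison.

```python
import torch
import torch.nn as nn

# Synthetic regression data: a small noisy linear problem.
torch.manual_seed(0)
X_train, X_test = torch.randn(50, 20), torch.randn(200, 20)
true_w = torch.randn(20, 1)
y_train = X_train @ true_w + 0.1 * torch.randn(50, 1)
y_test = X_test @ true_w + 0.1 * torch.randn(200, 1)

def make_net():
    torch.manual_seed(1)  # identical initialization for both optimizers
    return nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 1))

def train(opt_name, lr, epochs=2000):
    net = make_net()
    opt = (torch.optim.Adam(net.parameters(), lr=lr) if opt_name == "adam"
           else torch.optim.SGD(net.parameters(), lr=lr))
    loss_fn = nn.MSELoss()
    for _ in range(epochs):          # full-batch training on the tiny dataset
        opt.zero_grad()
        loss = loss_fn(net(X_train), y_train)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (loss_fn(net(X_train), y_train).item(),
                loss_fn(net(X_test), y_test).item())

for name, lr in [("sgd", 0.01), ("adam", 0.001)]:
    train_err, test_err = train(name, lr)
    print(f"{name}: train MSE = {train_err:.4f}, test MSE = {test_err:.4f}")
```

Both runs see exactly the same data, model, and initialization; only the optimizer differs, which is what isolates the inductive bias of the optimization algorithm itself.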

Srebro further talked about a few more works based on these observations. If you are interested by now, you should probably watch the whole video (you will need to fast-forward a bit to the start of the talk). I am, however, going to put in a little bit of my own thoughts here. Srebro emphasizes the importance of optimization a lot in this talk and says that the deep models we use now can basically express any problem we have; therefore, the model is not what makes deep learning work. However, we also know that the model does matter, based on the claims of many papers that introduce new model architectures. So how could both of these claims be true? We have to remember that the model architecture is also part of the optimization process: it shapes the geometry that the optimization algorithm is optimizing on. Hence, if the neural network model provides a landscape that allows the optimization algorithm to reach a desired minimum more easily, it will also generalize better to the test data. In other words, the model and the optimization algorithm have to work together.
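As a small concrete illustration of an optimizer "reaching a specific optimum" on a given landscape (my own example of the implicit-bias idea, not code from the talk): on an under-determined least-squares problem there are infinitely many zero-error solutions, yet plain gradient descent started from zero converges to the minimum-L2-norm one. The numbers below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 50))      # 10 equations, 50 unknowns: infinitely many exact solutions
b = rng.standard_normal(10)

x = np.zeros(50)                       # start gradient descent at the origin
lr = 0.005
for _ in range(50000):
    x -= lr * A.T @ (A @ x - b)        # gradient of 0.5 * ||Ax - b||^2

x_min_norm = np.linalg.pinv(A) @ b     # closed-form minimum-norm interpolating solution
print(np.linalg.norm(A @ x - b))       # ~0: a global minimum of the training loss was reached
print(np.linalg.norm(x - x_min_norm))  # ~0: and it is specifically the minimum-norm one
```

Both printed norms come out near zero: gradient descent not only reaches a global minimum of the training loss, it picks out one particular global minimum determined by the algorithm and the geometry it descends on, which is exactly the kind of implicit bias Srebro is pointing at.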