Life is a game, take it seriously

Looking Into Neuron Activities: Light Controlled Mice and Crystal Skulls

In brain, Neural Science, Paper Talk, Serious Stuffs on April 2, 2017 at 9:50 pm

by Li Yang Ku (Gooly)

It might feel like there hasn’t been much progress in brain theories recently; we still know very little about how signals are processed in our brain. However, scientists have moved away from sticking electrical probes into cat brains and have become quite creative at monitoring brain activity.

Optogenetic techniques, first tested in the early 2000s, allow researchers to activate neurons in a live brain with light. By controlling the light that activates motor neurons in a mouse, scientists can control its movement remotely, creating the “remote controlled mouse” you might have heard of in some not-that-popular sci-fi novels. This is achieved by taking the DNA segment of an alga that produces light-sensitive proteins and inserting it into specific brain neurons of the mouse using viral vectors. When light is shed on this protein, its ion channel opens and the neuron is activated. The result is pretty cool, but not as precise as your remote control car, yet. (see video below)

Besides optogenetic techniques, which are used to understand the function of a neuron by actively triggering it, methods for monitoring neuron activity directly have also become quite exciting, such as genetically modified mice whose brain neurons glow when activated. These approaches, which use fluorescent markers to monitor the level of calcium in the cell, can be traced back to the green fluorescent protein introduced by Chalfie et al. in 1994. With fluorescent indicators that bind with calcium, researchers could actually see brain activity for the first time. A lot of progress has been made on improving these markers since; in 2007 a group at Harvard introduced the “Brainbow”, which can generate up to 90 different fluorescent colors. This allowed scientists to identify neuron connections much more easily and also helped them win a few photo contests.

To better observe these fluorescent protein sensors (calcium imaging), a recent publication in 2016 further introduced the “crystal skull”, an approach that replaces the top of the skull of a genetically modified mouse with a curved glass window. This quite fancy approach allows researchers to monitor the activity of about half a million brain neurons in a live mouse by mounting a fluorescence macroscope on top of the crystal skull.

References:

Chalfie, Martin. “Green fluorescent protein as a marker for gene expression.” Trends in Genetics 10.5 (1994): 151.

Madisen, Linda, et al. “Transgenic mice for intersectional targeting of neural sensors and effectors with high specificity and performance.” Neuron 85.5 (2015): 942-958.

Josh Huang, Z., and Hongkui Zeng. “Genetic approaches to neural circuits in the mouse.” Annual review of neuroscience 36 (2013): 183-215.

Kim, Tony Hyun, et al. “Long-Term Optical Access to an Estimated One Million Neurons in the Live Mouse Cortex.” Cell Reports 17.12 (2016): 3385-3394.

 

Generative Adversarial Nets: Your Enemy is Your Best Friend?

In Computer Vision, deep learning, Machine Learning, Paper Talk on March 20, 2017 at 7:10 pm

by Li Yang Ku (gooly)

Generating realistic images with machines has always been one of the top items on my list of difficult tasks. Past attempts in the Computer Vision community were only able to get blurry images at best. The well-publicized Google Deepdream project was able to generate some interesting artsy images; however, they were modified from existing images and were designed more to make you feel like you are on drugs than to look realistic. Recently (2016), a work that combines the generative adversarial network framework with convolutional neural networks (CNNs) generated results that look surprisingly good. (A non-vision person would likely not be amazed, though.) This approach was quickly embraced by the community and was cited more than 200 times in less than a year.

This work is based on an interesting concept first introduced by Goodfellow et al. in the paper “Generative Adversarial Nets” at NIPS 2014 (http://papers.nips.cc/paper/5423-generative-adversarial-nets). The idea is to have two neural networks compete with each other: one tries to generate images as realistic as it can, while the other tries its best to distinguish them from real images. In theory, this competition reaches a global optimum where the generated images and the real images belong to the same distribution (it can be a lot trickier in practice, though). This work in 2014 got some pretty good results on digits and faces, but the generated natural images were still quite blurry (see figure above).
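To make the idea concrete, here is a minimal PyTorch sketch of the adversarial training loop. The network sizes, optimizer settings, and the dummy data are my own illustrative assumptions, not the exact setup of the paper; the point is only to show the alternating discriminator/generator updates the framework is built on.

```python
# Minimal GAN training-loop sketch (illustrative assumptions, not the paper's setup).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. a flattened 28x28 image

# Generator: maps random noise z to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to tell real samples from generated ones.
    z = torch.randn(batch_size, latent_dim)
    fake_batch = G(z).detach()  # detach so only D is updated in this step
    loss_D = bce(D(real_batch), real_labels) + bce(D(fake_batch), fake_labels)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # 2) Train the generator to fool the discriminator.
    z = torch.randn(batch_size, latent_dim)
    loss_G = bce(D(G(z)), real_labels)  # generator wants D to say "real"
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()

if __name__ == "__main__":
    dummy_real = torch.rand(32, data_dim) * 2 - 1  # scaled to [-1, 1] to match Tanh
    print(train_step(dummy_real))
```

In the full algorithm the discriminator may be updated several times per generator step, and at the theoretical optimum the discriminator outputs 1/2 everywhere because it can no longer tell the two distributions apart.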

In the more recent work “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks” by Radford, Metz, and Chintala, convolutional neural networks and the generative adversarial net framework are successfully combined using a few techniques that help stabilize training (https://arxiv.org/abs/1511.06434). With this approach, the generated images are sharp and surprisingly realistic at first glance. The figures above are some of the generated bedroom images. Notice that if you look closer, some of them may seem a bit weird.
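For readers curious what “combining CNNs with the GAN framework” looks like in code, here is a rough PyTorch sketch of a DCGAN-style generator following the architectural guidelines in the paper (fractionally-strided convolutions, batch normalization, ReLU activations, and a Tanh output). The exact layer widths and image size are assumptions for illustration.

```python
# DCGAN-style generator sketch (layer widths are illustrative assumptions).
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    def __init__(self, latent_dim=100, feature_maps=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # project the noise vector to a 4x4 feature map
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8), nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4), nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2), nn.ReLU(True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps), nn.ReLU(True),
            # 32x32 -> 64x64, Tanh maps pixel values to [-1, 1]
            nn.ConvTranspose2d(feature_maps, channels, 4, 2, 1, bias=False),
            nn.Tanh())

    def forward(self, z):
        # z has shape (batch, latent_dim); reshape to (batch, latent_dim, 1, 1)
        return self.net(z.view(z.size(0), -1, 1, 1))

if __name__ == "__main__":
    G = DCGANGenerator()
    images = G(torch.randn(2, 100))
    print(images.shape)  # torch.Size([2, 3, 64, 64])
```

The matching discriminator mirrors this stack with strided convolutions and LeakyReLU, which is one of the stabilizing choices the paper recommends.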

The authors further explored what the latent variables represent. Ideally the generator (the neural network that generates images) should disentangle independent features, so that each latent variable represents a meaningful concept. By modifying these variables, images with different characteristics can be generated. Note that these latent variables are what is fed into the generator network; in the previous examples they are randomly sampled from a uniform distribution. The figure above is an example where the authors show, through arithmetic operations, that the latent variables do represent meaningful concepts. If you subtract the average latent vector of men without glasses from the average latent vector of men with glasses and add the average latent vector of women without glasses, you obtain a latent vector that results in women with glasses when passed through the generator. This process identifies the latent direction that represents glasses.
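Below is a small sketch of that latent-space arithmetic, assuming you already have a trained generator and three groups of latent vectors whose generated images were manually sorted into “men with glasses”, “men without glasses”, and “women without glasses”. The helper name and the dummy generator in the usage example are hypothetical placeholders.

```python
# Latent-space arithmetic sketch; `G` is assumed to be a trained generator.
import torch

def latent_arithmetic(G, z_men_glasses, z_men_no_glasses, z_women_no_glasses):
    # Average the latent vectors within each group first (the paper averages a
    # few exemplars per concept before doing the arithmetic).
    men_glasses = z_men_glasses.mean(dim=0)
    men_plain = z_men_no_glasses.mean(dim=0)
    women_plain = z_women_no_glasses.mean(dim=0)

    # "men with glasses" - "men without glasses" + "women without glasses"
    # should land near "women with glasses" in latent space.
    z_result = men_glasses - men_plain + women_plain
    return G(z_result.unsqueeze(0))  # decode the new latent vector into an image

if __name__ == "__main__":
    # A dummy linear "generator" stands in for a trained model so the sketch runs.
    dummy_G = torch.nn.Linear(100, 64 * 64 * 3)
    groups = [torch.randn(3, 100) for _ in range(3)]
    print(latent_arithmetic(dummy_G, *groups).shape)
```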


Convolutional Neural Network Features for Robot Manipulation

In Computer Vision, deep learning, Robotics on October 24, 2016 at 6:30 am

by Li Yang Ku (Gooly)


In my previous post, I mentioned the obstacles when applying deep learning techniques directly to robotics: first, training data is harder to acquire; second, interacting with the world is not just a classification problem. In this post, I am gonna talk about a really simple approach that treats convolutional neural networks (CNNs) as feature extractors that generate a set of features similar to traditional features such as SIFT. This idea is applied to grasping on Robonaut 2 and published in arXiv (Associating Grasp Configurations with Hierarchical Features in Convolutional Neural Networks) with more details. The ROS package called ros-deep-vision, which generates such features from an RGB-D sensor, is also public.

Hierarchical CNN Features

 

When we look at deep models such as CNNs, we should keep in mind that these models work well because the way the layers stack up hierarchically matches how the data is structured. Our observed world is also hierarchical: there are commonly shared structures, such as edges, that can be combined in meaningful ways to represent more complex structures such as squares and cubes. A simple view of a CNN is just a tree structure, where a higher-level neuron is a combination of neurons in the previous layer. For example, a neuron that represents cuboids is a combination of neurons that represent the corners and edges of the cuboid. The figures above show examples of neurons that were found to activate consistently on cuboids and cylinders.

Deep Learning for Robotics

By taking advantage of this hierarchical nature of CNNs, we can turn a CNN into a feature extractor that generates features representing local structures of a higher-level structure. For example, such a hierarchical feature can represent the left edge of the top face of a box, while traditional edge detectors would find all edges in the scene. Instead of representing a feature with a single filter (neuron) in one of the CNN layers, this feature, which we call a hierarchical CNN feature, uses a tuple of filters from different layers. Using backpropagation restricted to one filter per layer allows us to locate such a feature precisely. By finding features such as the front and back edges of the top face of a box, we can learn where to place robot fingers relative to these hierarchical CNN features in order to manipulate the object. A rough sketch of this kind of gradient-based localization is shown below.
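The sketch below is not the exact restricted-backpropagation scheme from the paper; it only illustrates the underlying mechanism in PyTorch: pick a single filter in a higher layer, backpropagate only its response, and use the resulting image gradient to see where that filter responds. The choice of AlexNet, the layer index, and the filter index are arbitrary placeholders.

```python
# Gradient-based localization of a single CNN filter (illustrative only).
import torch
import torchvision.models as models

def localize_filter(image, layer_index=10, filter_index=42):
    """Return a pixel-space saliency map for one conv filter and one image."""
    cnn = models.alexnet(weights=None).features.eval()
    image = image.clone().requires_grad_(True)

    activation = image
    for i, layer in enumerate(cnn):
        activation = layer(activation)
        if i == layer_index:
            break

    # Backpropagate only the response of the chosen filter, so the image
    # gradient highlights where in the image this filter "looks".
    score = activation[0, filter_index].sum()
    score.backward()
    saliency = image.grad[0].abs().sum(dim=0)  # collapse color channels
    return saliency

if __name__ == "__main__":
    sal = localize_filter(torch.rand(1, 3, 224, 224))
    peak = torch.nonzero(sal == sal.max())[0]
    print("strongest response near pixel", peak.tolist())
```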

(Figure: Robonaut 2 grasping)