Recently, there’s been a great deal of excitement and interest in deep neural networks because they’ve achieved breakthrough results in areas such as computer vision.

However, a number of concerns about them remain. One is that it can be quite challenging to understand *what* a neural network is really doing: a well-trained network achieves high-quality results, but it is hard to see how it does so, and when a network fails, it is hard to see what went wrong.

While it is challenging to understand the behavior of deep neural networks in general, it turns out to be much easier to explore low-dimensional deep neural networks – networks that only have a few neurons in each layer. In fact, we can create visualizations to completely understand the behavior and training of such networks. This perspective will allow us to gain deeper intuition about the behavior of neural networks and observe a connection linking neural networks to an area of mathematics called topology.
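To make the idea concrete, here is a minimal sketch (not from the post itself; the weights are arbitrary illustrative values) of what a "low-dimensional" layer looks like: with only two neurons, a layer maps 2D points to 2D points, so its action can be drawn directly as a deformation of the plane.

```python
import numpy as np

# A layer with only 2 neurons: its action on 2D input points can be
# visualized directly as a continuous deformation of the plane.
# The weights and bias are arbitrary fixed values for illustration.
W = np.array([[1.0, -0.5],
              [0.5,  1.0]])   # 2x2 weight matrix
b = np.array([0.1, -0.1])     # bias vector

def layer(x):
    """One tanh layer mapping R^2 -> R^2: an affine map
    followed by a pointwise nonlinearity."""
    return np.tanh(x @ W.T + b)

# A grid of 2D points; plotting layer(grid) against grid shows
# exactly how the layer bends and squashes the input space.
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 5),
                            np.linspace(-1, 1, 5)), axis=-1).reshape(-1, 2)
transformed = layer(grid)
print(transformed.shape)  # (25, 2): outputs are still 2D, so they can be drawn
```

Stacking several such layers, and watching how each one warps the grid, is exactly the kind of visualization that makes these small networks completely understandable.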

A number of interesting things follow from this, including fundamental lower-bounds on the complexity of a neural network capable of classifying certain datasets.

**Read more on my new blog!**


Tags: deep learning, manifold hypothesis, neural networks, topology

This entry was posted on April 9, 2014 at 16:03 and is filed under Uncategorized. You can follow any responses to this entry through the RSS 2.0 feed.

April 10, 2014 at 02:15

Would you mind adding an Atom or RSS feed to your new blog?

April 10, 2014 at 03:34

I will soon! I wasn’t anticipating so much interest.

In the meantime, you can subscribe to this blog’s RSS feed and I will cross-post everything.

July 8, 2014 at 01:50

You can now subscribe to an RSS feed: http://colah.github.io/rss.xml