In the last few years, deep neural networks have dominated pattern recognition. They blew the previous state of the art out of the water for many computer vision tasks. Voice recognition is also moving that way.
But despite the results, we have to wonder… why do they work so well?
This post reviews some remarkable results in applying deep neural networks to natural language processing (NLP). In doing so, I hope to make accessible one promising answer as to why deep neural networks work. I think it’s a very elegant perspective.
Read the full post on my new blog!
Tags: deep learning, neural networks, NLP, recursive neural networks, representations
August 13, 2014 at 10:09
Do you have a version in which all the math markup renders properly? Like, instead of “\(R\)”, it says just “R” in math mode.
August 14, 2014 at 02:31
The linked post should render properly with MathJax.
August 14, 2014 at 09:14
Ah, it does, sorry: because of an extension I was visiting https://colah.github.io/posts/2014-07-NLP-RNNs-Representations/ (with https instead of http), where MathJax didn’t get loaded (blocked as insecure content).
Fantastic post, BTW!