John D. Kelleher did an amazing job with Deep Learning, part of The MIT Press Essential Knowledge Series. This tiny book contains all the essential information about the topic, from a brief history of deep learning (née connectionism) through various activation functions to backpropagation and gradient descent.
It presents the main ideas of deep learning in a way that is comprehensible to a broad audience. It doesn't deal with how to implement neural networks programmatically; instead, it focuses on the theory, which means a fair amount of maths. The mathematically less inclined can simply skim the equations and the mathematical explanations and still come away with an intuitive understanding of the field, but this reading strategy misses the book's great strength: it illuminates the mathematical and algorithmic ingredients of deep learning. Instead of skipping the demanding parts, one can easily find online resources (e.g. Khan Academy) to refresh one's high-school calculus and algebra and make the most of this fantastic book. Go for it: it's never been easier to learn deep learning!
The header image was downloaded from The Internet Encyclopedia of Philosophy.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.