There is a lot of talk going on about neural networks, deep learning, and artificial intelligence. When people talk about neural networks, I sometimes get the impression that they think it’s
- either some almost lifelike thing: „it’s like the brain!“
- or that it’s really complicated math stuff that no „normal“ person can understand.
Well, first things first: it’s nothing lifelike at all.
It is actually just math: just adding and multiplying numbers. And yes, it really looks complicated. But if you bite through all these math formulas, some books, and some online video tutorials, you get the impression that the basics are not some kind of „higher math“ – it’s just complicated because you have a lot of numbers to calculate with.
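To make „just adding and multiplying“ concrete, here is a tiny sketch of a single neuron’s forward pass – my own toy numbers, purely for illustration:

```python
import numpy as np

# A "neuron" is nothing lifelike: multiply, add, squash.
inputs = np.array([1.0, 0.5, -1.5])    # made-up input values
weights = np.array([0.8, -0.2, 0.4])   # made-up weights

# Multiply and add: 1.0*0.8 + 0.5*(-0.2) + (-1.5)*0.4 = 0.1
weighted_sum = np.dot(inputs, weights)

# The sigmoid function squashes the result into the range (0, 1).
output = 1 / (1 + np.exp(-weighted_sum))

print(weighted_sum, output)            # 0.1 and roughly 0.525
```

That really is all a single neuron does – the „complicated“ part is only that a network does this many, many times.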
If you have a bunch of numbers on a page, you can easily get confused about what to multiply and add with what, and where. To learn this stuff and to understand it sustainably(!), I needed some simple examples that I could work through again and again. And I learn best if I explain it to someone else – so I wanted to create a presentation about neural networks.
A couple of months ago I stumbled upon Trask’s article, in which he wrote a neural network in just 11 lines of code.¹ That helped me a lot in preparing my slides, and I reused his code to calculate the numbers you find in this article. I plan to publish my corresponding IPython notebook as well. It’s just not ready yet :-)
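For reference, a two-layer network in the spirit of Trask’s article looks roughly like this – a sketch from memory with comments added, not a verbatim copy of his code:

```python
import numpy as np

np.random.seed(1)                        # reproducible random weights

# Toy dataset: 4 examples, 3 input features, XOR-like target.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T

# Randomly initialized weights for two layers, values in (-1, 1).
syn0 = 2 * np.random.random((3, 4)) - 1
syn1 = 2 * np.random.random((4, 1)) - 1

for j in range(60000):
    # Forward pass: multiply, add, squash (sigmoid).
    l1 = 1 / (1 + np.exp(-np.dot(X, syn0)))
    l2 = 1 / (1 + np.exp(-np.dot(l1, syn1)))
    # Backpropagation: error times the slope of the sigmoid.
    l2_delta = (y - l2) * (l2 * (1 - l2))
    l1_delta = l2_delta.dot(syn1.T) * (l1 * (1 - l1))
    # Weight update: again just multiplying and adding.
    syn1 += l1.T.dot(l2_delta)
    syn0 += X.T.dot(l1_delta)

print(l2)                                # predictions close to [0, 1, 1, 0]
```

Notice that every single line is just the multiply-add-squash pattern from above, applied to whole batches of numbers at once.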
So, have fun!
And yeah, it reeeeally looks complicated in math :-)

Andrew Ng explains neural network backpropagation in the Coursera Machine Learning MOOC
But do not get me wrong: Andrew Ng is a really good and really pleasant teacher, and I really like his course. I still go back to it from time to time when something is unclear to me (again :-). If you have looked at my slides and still want to know more about neural networks, I highly recommend his course. The math formulas really do earn their place here, because there is no better language to condense the whole calculation process into just a few tiny symbols. Understanding the formulas makes it much easier to talk about changes in the calculations in different implementations of neural networks. And the course covers even more about neural networks, for example how to avoid errors like the one on my last slides.
- In case you wonder how you can calculate so many numbers in just 11 lines of code: that’s another story about matrix multiplication, which is just a kind of „notational shorthand“ for writing out all the equations manually (see the short sketch below) – if you need to refresh your school knowledge of matrix multiplication, I highly recommend the Khan Academy. [↩]
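To illustrate that „shorthand“ point with my own toy example: the two weighted sums below can be written out by hand, or as one single matrix multiplication – the results are identical:

```python
import numpy as np

# One training example with three inputs, and a layer with two neurons.
x = np.array([0.5, 0.8, 0.2])    # three input values
W = np.array([[0.1, 0.4],        # weights from input 1 to neurons 1 and 2
              [0.7, 0.2],        # weights from input 2 to neurons 1 and 2
              [0.3, 0.9]])       # weights from input 3 to neurons 1 and 2

# Writing the equations out manually, one weighted sum per neuron:
z1 = x[0] * W[0, 0] + x[1] * W[1, 0] + x[2] * W[2, 0]
z2 = x[0] * W[0, 1] + x[1] * W[1, 1] + x[2] * W[2, 1]

# The same thing as a single matrix multiplication:
z = x.dot(W)

print(z1, z2)   # 0.67 0.54
print(z)        # [0.67 0.54]
```

With more neurons and more examples the manual version explodes, while the matrix version stays one line – that’s the whole trick behind the 11 lines.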