Deep Learning – Part 1 – Intuition


Deep Learning is becoming popular in the Machine Learning space, and there is quite a bit of material about it on the Net. After spending a lot of time going through it, I came across a few great resources that gave me an aha moment. I will list them as a series so you can go through them in the proper sequence.

Intuition

The short video (~20 minutes) below, created by Brandon Rohrer, is by far the best I have found on the Net. The video covers the following points:

  • A general overview of what Deep Learning is.
  • A great explanation of how we model a neural network based on our understanding of biological neurons.
  • An introduction to backpropagation, a technique for training a neural net by updating its weights via gradient descent. It gives us an efficient way to find the weight combination that minimizes the error on the training samples; technically, it finds a minimum of the cost function of the error. (A minimal sketch of such an update follows this list.)
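
To make the idea concrete, here is a minimal sketch of my own (not code from the video) of one neuron trained by gradient descent on a squared-error cost. The input, target, and learning rate are made-up numbers for illustration:

```python
# Toy example: a single weight updated by gradient descent.
# All numbers (input, target, learning rate) are made up for the sketch.
x, target = 2.0, 1.0
w = 0.1                                # initial weight
learning_rate = 0.1

for step in range(25):
    prediction = w * x                 # forward pass
    error = prediction - target
    cost = 0.5 * error ** 2            # cost function of the error
    gradient = error * x               # d(cost)/d(w) via the chain rule
    w -= learning_rate * gradient      # gradient-descent weight update

print(w, w * x)                        # w approaches 0.5, prediction approaches 1.0
```

Each step nudges the weight in the direction that lowers the cost; backpropagation is what computes that gradient efficiently across many layers.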

NOTE: The video does not go into any math. If that is what you are looking for, it may not meet your needs.

Once you have the basic idea, you can watch another video from Brandon that explains in more detail how backpropagation works. The video covers the following points:

  • Introduces the sigmoid function and the ReLU unit.
  • The sigmoid function is used as an activation function that determines whether a signal should carry on to the next layer. According to the formula of the sigmoid function, \(\frac{1}{1+e^{-y}}\), its output lies between 0 and 1, with a threshold deciding whether the signal carries on to the next level. So I am not quite sure why the example says it is between -1 and 1; perhaps that example actually uses the hyperbolic tangent (tanh), whose output does lie between -1 and 1. For the reason why we use the sigmoid function as an activation function in a neural network, you can see the answer on Quora here. (A small sketch of these activation functions follows this list.)

  • The ReLU unit ignores negative outputs by clipping them to 0.
  • Further explains how backpropagation works.
  • Shows why gradient descent can dramatically reduce the time needed to find the optimal point of the cost function.
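
To see the ranges concretely, here is a small sketch of my own (not from the video) of the three activation functions. Note that the logistic sigmoid stays in (0, 1) while tanh spans (-1, 1), which may be the source of the -1-to-1 example:

```python
import math

def sigmoid(y):
    # Logistic sigmoid: output always lies in (0, 1).
    return 1.0 / (1.0 + math.exp(-y))

def relu(y):
    # ReLU: negative inputs are clipped to 0.
    return max(0.0, y)

for y in (-5.0, -1.0, 0.0, 1.0, 5.0):
    print(f"y={y:+.1f}  sigmoid={sigmoid(y):.3f}  "
          f"tanh={math.tanh(y):+.3f}  relu={relu(y):.1f}")
```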

What I like about the video is the example he uses to illustrate how layers work together in a neural net to solve a problem. Often, we only define the number of inputs and outputs according to the problem we are trying to solve; the depth of the hidden layers and the number of neurons in each layer rarely need to be exact, and you are unlikely to be able to reason them out the way his example does. So his illustration is great for building intuition, though it is also a bit oversimplified and perhaps understates the magic of deep neural networks.

From Multivariate Linear Regression to Deep Learning

In this video, Brais Martinez shows us how to frame a problem as a deep learning problem. He starts out modeling house price prediction as a multivariate linear regression problem, then extends it into deep learning. I found it very useful. (A rough sketch of that step is shown below.)
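
As a rough sketch of that step (my own illustration, with made-up features, not code from the video): a multivariate linear regression is just a single neuron with no activation, and stacking layers with a nonlinearity in between turns it into a deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up features for four houses: [square feet, bedrooms, age in years].
X = np.array([[2100.0, 3.0, 10.0],
              [1600.0, 2.0, 25.0],
              [2400.0, 4.0,  5.0],
              [1200.0, 2.0, 40.0]])

# Multivariate linear regression: a single "neuron" with no activation.
w = rng.normal(size=3)
b = 0.0
linear_prediction = X @ w + b                   # shape (4,)

# Deep-learning extension: add a hidden layer with a ReLU nonlinearity.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # hidden layer, 4 neurons
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer, 1 neuron
hidden = np.maximum(0.0, X @ W1 + b1)           # ReLU activation
deep_prediction = (hidden @ W2 + b2).ravel()    # shape (4,)
```

The weights here are random, so the predictions are meaningless until trained; the point is only the structural step from one linear unit to stacked nonlinear layers.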

I hope this post, with the videos above, gives you a good overview of neural networks. In the next post of the series, I will show you how to implement a simple neural network using Python.
