An Introduction to Deep Learning

In recent years, deep learning has become somewhat of a buzzword in the tech community. We always seem to hear about it in news regarding AI, and yet most people don't actually know what it is! In this article, I'll be demystifying the buzzword that is deep learning and providing an intuition for how it works.

Building the Intuition

Generally speaking, deep learning is a machine learning method that takes in an input X and uses it to predict an output Y. As an example, given the stock prices of the past week as input, my deep learning algorithm will try to predict the stock price of the next day.

Given a large dataset of input and output pairs, a deep learning algorithm will try to minimize the difference between its prediction and expected output. By doing this, it tries to learn the association/pattern between given inputs and outputs — this in turn allows a deep learning model to generalize to inputs that it hasn’t seen before.
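To make this framing concrete, here is a minimal sketch in Python using the stock-price example. The model, prices, and numbers here are all made up for illustration; a real model would learn the mapping from many (input, output) pairs rather than averaging:

    # A minimal sketch of the input -> output framing described above.
    # "model" is a stand-in for any deep learning model.

    def model(last_week_prices):
        # Placeholder "prediction": just averages the inputs.
        return sum(last_week_prices) / len(last_week_prices)

    x = [101.2, 102.5, 100.9, 103.1, 104.0, 103.7, 105.2]  # past week's prices (input X)
    y = 106.0                                              # actual next-day price (output Y)

    prediction = model(x)
    error = prediction - y  # training tries to make this difference small
    print(f"prediction={prediction:.2f}, error={error:.2f}")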

As another example, let's say that the inputs are images of dogs and cats, and the outputs are labels for those images (i.e. is the input picture a dog or a cat). If an input has a label of a dog, but the deep learning algorithm predicts a cat, then the algorithm will learn that the features of the given image (e.g. sharp teeth, facial structure) are associated with a dog.

How Do Deep Learning Algorithms "Learn"?

Deep learning algorithms use something called a neural network to find associations between a set of inputs and outputs. Its basic structure is described below.

A neural network is composed of input, hidden, and output layers, all of which are composed of "nodes". Input layers take in a numerical representation of data (e.g. an image as a grid of pixel values), output layers produce predictions, and hidden layers perform most of the computation.
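As a rough sketch (not any particular library's API), such a network can be represented as a set of weight matrices and bias vectors. Here I'm assuming numpy and a made-up network with 3 input nodes, 4 hidden nodes, and 1 output node:

    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny network: 3 input nodes, 4 hidden nodes, 1 output node.
    # Each layer is defined by a weight matrix W and a bias vector b.
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input -> hidden
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # hidden -> output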

I won't go too in depth into the math, but information is passed between network layers through a function of the form a = f(Wx + b): a weighted sum of the inputs, plus a bias, passed through an activation function f. The major points to keep note of here are the tunable weight and bias parameters, represented by W and b respectively. These are essential to the actual "learning" process of a deep learning algorithm.
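Continuing the sketch above, passing information from one layer to the next looks roughly like this, with a sigmoid as an example activation function:

    def sigmoid(z):
        # Squashes any real number into the range (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x):
        # Each layer computes f(Wx + b): a weighted sum plus a bias,
        # passed through the activation function f.
        hidden = sigmoid(W1 @ x + b1)       # input -> hidden
        output = sigmoid(W2 @ hidden + b2)  # hidden -> output
        return output

    x = np.array([0.5, -1.2, 3.0])  # a numerical representation of one input
    print(forward(x))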

After the neural network passes its inputs all the way to its outputs, the network evaluates how good its prediction was (relative to the expected output) through something called a loss function. As an example, the "Mean Squared Error" loss function is:

MSE = (1/n) Σᵢ (ŷᵢ − yᵢ)²

Here ŷ ("y hat") represents the prediction, while y represents the expected output. The mean over n samples is used when batches of inputs and outputs are evaluated simultaneously.
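As a sketch (again assuming numpy as np), this loss can be computed as:

    def mse_loss(y_pred, y_true):
        # Mean Squared Error: the average of the squared differences
        # between predictions and expected outputs.
        y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
        return np.mean((y_pred - y_true) ** 2)

    print(mse_loss([2.5, 0.0, 2.1], [3.0, -0.5, 2.0]))  # a small positive number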

The goal of the network is ultimately to minimize this loss by adjusting its weights and biases. Using something called "backpropagation" with gradient descent, the network backtracks through all of its layers to update the weights and biases of every node in the direction opposite to the gradient of the loss. In other words, every iteration of backpropagation should result in a smaller loss than before.

Without going into the proof, the repeated updates of the weights and biases ultimately turn the network into a precise function approximator, one that models the relationship between inputs and expected outputs.
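To make that concrete, here is a minimal sketch of gradient descent on a single-weight model, where the gradient can be written out by hand. The data, learning rate, and variable names are made up for illustration; a real network uses backpropagation to compute gradients for every layer:

    # Fit y = w * x to one (input, output) pair by gradient descent.
    x_val, y_true = 2.0, 10.0  # made-up data; the best w would be 5.0
    w = 0.0                    # initial weight
    lr = 0.1                   # learning rate (step size)

    for step in range(20):
        y_pred = w * x_val
        loss = (y_pred - y_true) ** 2
        grad = 2 * (y_pred - y_true) * x_val  # d(loss)/dw
        w -= lr * grad                        # step opposite the gradient
        print(f"step={step} loss={loss:.4f} w={w:.4f}")

Each printed iteration shows the loss shrinking as w approaches its best value, which is exactly the behavior described above.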

So why is it called “Deep” Learning?

The "deep" part of deep learning refers to creating deep neural networks: networks with a large number of layers. With the addition of more weights and biases, the neural network improves its ability to approximate more complex functions.
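Continuing the earlier numpy sketch (the layer sizes here are arbitrary), a deeper network simply stacks more hidden layers between the input and the output:

    # A "deep" network: several hidden layers stacked between input and output.
    layer_sizes = [3, 16, 16, 16, 1]  # more layers = "deeper"
    layers = [
        (rng.normal(size=(n_out, n_in)), np.zeros(n_out))
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    ]

    def deep_forward(x):
        # Pass the input through every layer in turn.
        for W, b in layers:
            x = sigmoid(W @ x + b)
        return x

    print(deep_forward(np.array([0.5, -1.2, 3.0])))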

Conclusions and Takeaways

Deep learning is ultimately an expansive field, and is far more complex than I've described it to be. Various types of neural networks exist for different tasks (e.g. convolutional neural networks for computer vision, recurrent neural networks for NLP), and they go far beyond the basic neural network that I've covered.

[Figure: a convolutional neural network]

Even if you don’t remember everything from this article, here are a few takeaways:

  • Deep Learning refers to Deep Neural Networks
  • Deep Neural Networks find associations between a set of inputs and outputs
  • Backpropagation is used to update the parameters (weights and biases) of a neural network

The implications of deep learning are immense. While I gave fairly simple application examples such as image classification and stock price prediction, there's ultimately so much more! Video synthesis, self-driving cars, human-level game AI, and more: all of these came from deep learning. If you're interested in learning more, I wrote an article about using deep reinforcement learning to play Doom; click the link below to check it out!

Doom with Deep Reinforcement Learning
