DL vs. ML
Machine Learning and Deep Learning are subsets of AI that rely on datasets to train models and improve their accuracy through pattern recognition.
But what makes them different?
Machine Learning
Machine learning allows a system to learn and improve on its own. Systems don't need to be explicitly programmed, because ML algorithms can recognize patterns in data and make predictions every time new data is entered into the system.
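To make that concrete, here is a minimal sketch, assuming Python with scikit-learn and a made-up toy dataset, of a model that keeps learning as new data is entered, without any of its rules being explicitly programmed:

```python
# Minimal sketch: a model that keeps learning as new data arrives.
# Assumes scikit-learn; the numbers below are toy data made up for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])

# First batch of data the system has seen so far
X_first = np.array([[0.0], [1.0], [2.0], [3.0]])
y_first = np.array([0, 0, 1, 1])
model.partial_fit(X_first, y_first, classes=classes)

# New data is entered later; the same model keeps learning from it
X_new = np.array([[4.0], [5.0]])
y_new = np.array([1, 1])
model.partial_fit(X_new, y_new)

# The model now makes predictions based on the patterns it has picked up
print(model.predict(np.array([[0.5], [4.5]])))
```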
These are the four types of learning that go into machine learning:
Supervised Learning: The model is trained on labeled training data, which means it learns from known inputs paired with their correct outputs (labeled data); a short code sketch contrasting this with unsupervised learning follows this list.
Unsupervised Learning: The model is not spoon-fed labeled data. Instead, it finds patterns on its own by dividing the data into groups of samples that share similar features.
Semi-Supervised Learning: A technique that combines a small amount of labeled data with a large amount of unlabeled data to train a model.
Reinforcement Learning: The model learns through a series of trial-and-error experiments, also known as “learning by doing,” a concept humans can relate to. The algorithm is given neither labeled data nor the expected outputs; instead, it learns from positive and negative feedback that shapes its behavior until it understands how to perform the task.
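As a rough illustration of the first two types, here is a small sketch, assuming Python, scikit-learn, and its bundled Iris toy dataset, that trains a supervised classifier on labeled data and then lets an unsupervised algorithm group the same samples without ever seeing the labels:

```python
# Sketch: supervised vs. unsupervised learning on the same toy dataset.
# Assumes scikit-learn and its bundled Iris dataset, used here only for illustration.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the model is shown both the features X and the known labels y
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised: only the features are given; the algorithm divides the data
# into groups (clusters) of samples with similar features on its own
km = KMeans(n_clusters=3, n_init=10).fit(X)
print("unsupervised cluster labels:", km.labels_[:5])
```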
Deep Learning
Deep learning algorithms mirror the way the human brain analyzes data and reasons about it. They use artificial neural networks (ANNs) to process and analyze information. To learn more about DL, you can visit the previous post.
Deep learning allows computers to learn from large amounts of data and recognize complex patterns that would be difficult for humans or traditional machine learning models to detect.
The following are five types of neural networks used in deep learning:
Feedforward neural networks (FF): Data moves forward in a straight line from the input layer through one or more hidden layers until it reaches the output layer (a minimal code sketch of this architecture follows this list).
Recurrent neural networks (RNN): Networks that use a “memory” mechanism to store and reuse information from previous inputs in a sequence. This allows the machine to learn long-term dependencies within the sequence, leading to more accurate predictions and a deeper understanding of patterns.
Long short-term memory (LSTM): A type of RNN that retains information over long periods of time. It does this by using special memory cells and gates to control the flow of information, helping it avoid forgetting important details.
Convolutional neural networks (CNN): Networks that excel at processing visual data. They stack a series of convolutional layers, each applying filters to extract specific features such as edges, textures, and shapes.
Generative adversarial networks (GAN): Consist of two components: a generator that creates fake data and a discriminator that tries to distinguish between real and fake data.
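To show what the simplest of these looks like in practice, here is a minimal sketch, assuming Python with TensorFlow/Keras and random toy data made up for illustration, of a feedforward network where data flows from the input layer, through one hidden layer, to the output layer:

```python
# Sketch: a tiny feedforward neural network.
# Assumes TensorFlow/Keras; the toy data below is random and only for illustration.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 200 samples with 4 features and binary labels
X = np.random.rand(200, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

# Data flows one way: input layer -> hidden layer -> output layer
model = keras.Sequential([
    layers.Input(shape=(4,)),
    layers.Dense(8, activation="relu"),     # hidden layer
    layers.Dense(1, activation="sigmoid"),  # output layer
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[:3]))
```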
In summary, ML uses statistical methods to learn patterns from data and often requires human intervention, while DL employs artificial neural networks inspired by the human brain, enabling it to learn complex patterns directly from raw data with minimal human intervention.