Deep Learning A-Z 2026: Neural Networks, AI & ChatGPT Prize
-
Welcome to the course!
-
———-Part 1 – Artificial Neural Networks———-
-
ANN Intuition
- What You’ll Need for ANN
- How Neural Networks Learn: Gradient Descent and Backpropagation Explained
- Understanding Neurons: The Building Blocks of Artificial Neural Networks
- Understanding Activation Functions in Neural Networks: Sigmoid, ReLU, and More
- How Do Neural Networks Work? Step-by-Step Guide to Property Valuation Example
- How Do Neural Networks Learn? Understanding Backpropagation and Cost Functions
- Mastering Gradient Descent: Key to Efficient Neural Network Training
- How to Use Stochastic Gradient Descent for Deep Learning Optimization
- Understanding Backpropagation Algorithm: Key to Optimizing Deep Learning Models
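As a taste of the gradient descent intuition covered in these lectures, here is a minimal pure-Python sketch (not the course's actual code; the tiny dataset, learning rate and epoch count are made up for illustration) of a single sigmoid neuron trained by stochastic gradient descent:

```python
import math

def sigmoid(z):
    # squashes any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical dataset: one input feature, binary label
data = [(0.0, 0), (1.0, 0), (2.0, 1), (3.0, 1)]

w, b = 0.0, 0.0   # start from arbitrary weights
lr = 0.5          # learning rate

for epoch in range(2000):
    for x, y in data:
        y_hat = sigmoid(w * x + b)   # forward pass
        # gradient of the cross-entropy cost; for a sigmoid output
        # this simplifies to (y_hat - y) times the input
        dw = (y_hat - y) * x
        db = (y_hat - y)
        w -= lr * dw                 # step downhill on the cost surface
        b -= lr * db

# After training, the neuron separates the two classes
print(sigmoid(w * 0.0 + b) < 0.5)   # low input -> class 0
print(sigmoid(w * 3.0 + b) > 0.5)   # high input -> class 1
```

The lectures cover the same loop at full scale: a cost function, its gradient, and repeated small steps against that gradient.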
-
Building an ANN
- Step 1 – Data Preprocessing for Deep Learning: Preparing Neural Network Dataset
- Check out our free course on ANN for Regression
- Step 2 – Data Preprocessing for Neural Networks: Essential Steps and Techniques
- Step 3 – Constructing an Artificial Neural Network: Adding Input & Hidden Layers
- Step 4 – Compile and Train Neural Network: Optimizers, Loss Functions & Metrics
- Step 5 – How to Make Predictions and Evaluate Neural Network Model in Python
-
———-Part 2 – Convolutional Neural Networks———-
-
CNN Intuition
- What You’ll Need for CNN
- Understanding CNN Architecture: From Convolution to Fully Connected Layers
- How Do Convolutional Neural Networks Work? Understanding CNN Architecture
- How to Apply Convolution Filters in Neural Networks: Feature Detection Explained
- Rectified Linear Units (ReLU) in Deep Learning: Optimizing CNN Performance
- Understanding Spatial Invariance in CNNs: Max Pooling Explained for Beginners
- How to Flatten Pooled Feature Maps in Convolutional Neural Networks (CNNs)
- How Do Fully Connected Layers Work in Convolutional Neural Networks (CNNs)?
- CNN Building Blocks: Feature Maps, ReLU, Pooling, and Fully Connected Layers
- Understanding Softmax Activation and Cross-Entropy Loss in Deep Learning
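The convolution, ReLU and max pooling building blocks from these lectures can be sketched in plain Python. This is a toy illustration, not the course code: the 6x6 "image" and the vertical-edge kernel are made-up values chosen so the result is easy to check by hand.

```python
# 6x6 "image": a bright vertical stripe on a dark background
img = [[0, 0, 1, 1, 0, 0] for _ in range(6)]

# 3x3 vertical-edge detector
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

def convolve(image, k):
    # valid (no-padding) 3x3 convolution producing a feature map
    n, m = len(image), len(image[0])
    return [[sum(image[i + a][j + b] * k[a][b]
                 for a in range(3) for b in range(3))
             for j in range(m - 2)]
            for i in range(n - 2)]

def relu(fmap):
    # keep positive activations, zero out the rest
    return [[max(0, v) for v in row] for row in fmap]

def max_pool(fmap):
    # 2x2 max pooling with stride 2: spatial invariance, smaller map
    return [[max(fmap[i][j], fmap[i][j+1], fmap[i+1][j], fmap[i+1][j+1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

fmap = max_pool(relu(convolve(img, kernel)))
print(fmap)  # -> [[3, 0], [3, 0]] (the left edge of the stripe lights up)
```

A real CNN learns the kernel values instead of hand-picking them, and stacks many such filters per layer.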
-
Building a CNN
- Step 1 – Convolutional Neural Networks Explained: Image Classification Tutorial
- Step 2 – Deep Learning Preprocessing: Scaling & Transforming Images for CNNs
- Step 3 – Building CNN Architecture: Convolutional Layers & Max Pooling Explained
- Step 4 – Train CNN for Image Classification: Optimize with Keras & TensorFlow
- Step 5 – Deploying a CNN for Real-World Image Recognition
- Develop an Image Recognition System Using Convolutional Neural Networks
-
———-Part 3 – Recurrent Neural Networks———-
-
RNN Intuition
- What You’ll Need for RNN
- How Do Recurrent Neural Networks (RNNs) Work? Deep Learning Explained
- What is a Recurrent Neural Network (RNN)? Deep Learning for Sequential Data
- Understanding the Vanishing Gradient Problem in Recurrent Neural Networks (RNNs)
- Understanding Long Short-Term Memory (LSTM) Architecture for Deep Learning
- How LSTMs Work in Practice: Visualizing Neural Network Predictions
- LSTM Variations: Peepholes, Combined Gates, and GRUs in Deep Learning
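The vanishing gradient problem covered above has a quick numerical illustration. Backpropagation through time multiplies one local derivative per timestep, and a sigmoid's derivative never exceeds 0.25, so the product collapses. This sketch assumes, purely for illustration, a constant pre-activation of 0.5 at every step:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

grad = 1.0
history = []
for t in range(20):
    z = 0.5                                  # assumed pre-activation
    local = sigmoid(z) * (1 - sigmoid(z))    # sigmoid derivative, ~0.235 here
    grad *= local                            # one factor per timestep
    history.append(grad)

print(history[0])    # after 1 step: ~0.235
print(history[-1])   # after 20 steps: vanishingly small (~1e-13)
```

This is exactly why classic RNNs forget: the gradient reaching early timesteps is effectively zero. LSTMs fight this with a cell state whose updates are closer to additive than multiplicative.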
-
Building an RNN
- Step 1 – Building a Robust LSTM Neural Network for Stock Price Trend Prediction
- Step 2 – Importing Training Data for LSTM Stock Price Prediction Model
- Step 3 – Applying Min-Max Normalization for Time Series Data in Neural Networks
- Step 4 – Building X_train and y_train Arrays for LSTM Time Series Forecasting
- Step 5 – Preparing Time Series Data for LSTM Neural Network in Stock Forecasting
- Step 6 – Create RNN Architecture: Sequential Layers vs Computational Graphs
- Step 7 – Adding First LSTM Layer: Key Components for Stock Market Prediction
- Step 8 – Implementing Dropout Regularization in LSTM Networks for Forecasting
- Step 9 – Finalizing RNN Architecture: Dense Layer for Stock Price Forecasting
- Step 10 – Compile RNN with Adam Optimizer for Stock Price Prediction in Python
- Step 11 – Optimizing Epochs and Batch Size for LSTM Stock Price Forecasting
- Step 12 – Visualizing LSTM Predictions: Real vs Forecasted Google Stock Prices
- Step 13 – Preparing Historical Stock Data for LSTM Model: Scaling and Reshaping
- Step 14 – Creating 3D Input Structure for LSTM Stock Price Prediction in Python
- Step 15 – Visualizing LSTM Predictions: Plotting Real vs Predicted Stock Prices
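The sliding-window idea behind building X_train and y_train for an LSTM can be sketched in a few lines. The prices below are hypothetical, already min-max-scaled values, and the window of 3 stands in for the 60-day lookback typically used for daily stock data:

```python
# Hypothetical scaled closing prices (already normalized to [0, 1])
prices = [0.10, 0.12, 0.15, 0.14, 0.18, 0.20, 0.19, 0.23, 0.25, 0.24]

window = 3   # lookback length; a real model might use 60 past days
X_train, y_train = [], []
for i in range(window, len(prices)):
    X_train.append(prices[i - window:i])  # the last `window` observations
    y_train.append(prices[i])             # the value to predict next

print(X_train[0], y_train[0])  # -> [0.1, 0.12, 0.15] 0.14
# An LSTM expects 3D input of shape (samples, timesteps, features),
# so each window would then be reshaped to (len(X_train), window, 1).
```

Every supervised sequence model in this part rests on this transformation: turn one long series into many (window, next-value) pairs.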
-
———-Part 4 – Self Organizing Maps———-
-
SOMs Intuition
- How Do Self-Organizing Maps Work? Understanding SOM in Deep Learning
- Self-Organizing Maps (SOM): Unsupervised Deep Learning for Dimensionality Reduction
- Why K-Means Clustering is Essential for Understanding Self-Organizing Maps
- Self-Organizing Maps Tutorial: Dimensionality Reduction in Machine Learning
- How Self-Organizing Maps (SOMs) Learn: Unsupervised Deep Learning Explained
- How to Create a Self-Organizing Map (SOM) in DL: Step-by-Step Tutorial
- Interpreting SOM Clusters: Unsupervised Learning Techniques for Data Analysis
- Understanding K-Means Clustering: Intuitive Explanation with Visual Examples
- K-Means Clustering: Avoiding the Random Initialization Trap in Machine Learning
- How to Find the Optimal Number of Clusters in K-Means: WCSS and Elbow Method
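The WCSS and elbow method from the last lecture can be seen in a tiny, dependency-free k-means sketch. The 1D points are made up so the two groups are obvious, and the initialization is deterministic for simplicity (a real run would use k-means++ to dodge the random initialization trap):

```python
# Toy 1D data with two obvious groups
points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]

def kmeans_wcss(points, k, iters=10):
    # naive Lloyd's algorithm with deterministic initialization
    centroids = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda c: (p - centroids[c]) ** 2)
            clusters[idx].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    # within-cluster sum of squares for this k
    return sum(min((p - c) ** 2 for c in centroids) for p in points)

# WCSS always decreases as k grows; the "elbow" is where the drop flattens
for k in (1, 2, 3):
    print(k, round(kmeans_wcss(points, k), 3))
```

Here the drop from k=1 to k=2 is huge while k=2 to k=3 barely improves, so the elbow says k=2, matching the two visible groups.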
-
Building a SOM
- Step 1 – Implementing Self-Organizing Maps (SOMs) for Fraud Detection in Python
- Step 2 – SOM Weight Initialization and Training: Tutorial for Anomaly Detection
- Step 3 – SOM Visualization Techniques: Colorbar & Markers for Outlier Detection
- Step 4 – Catching Cheaters with SOMs: Mapping Winning Nodes to Customer Data
-
Mega Case Study
- Step 1 – Building a Hybrid Deep Learning Model for Credit Card Fraud Detection
- Step 2 – Developing a Fraud Detection System Using Self-Organizing Maps
- Step 3 – Building a Hybrid Model: From Unsupervised to Supervised Deep Learning
- Step 4 – Implementing Fraud Detection with SOM: A Deep Learning Approach
-
———-Part 5 – Boltzmann Machines———-
-
Boltzmann Machine Intuition
- Understanding Boltzmann Machines: Deep Learning Fundamentals for AI Enthusiasts
- Boltzmann Machines vs. Neural Networks: Key Differences in Deep Learning
- Deep Learning Fundamentals: Energy-Based Models & Their Role in Neural Networks
- How to Edit Wikipedia: Adding Boltzmann Distribution in Deep Learning
- How Restricted Boltzmann Machines Work: Deep Learning for Recommender Systems
- How Energy-Based Models Work: Deep Dive into Contrastive Divergence Algorithm
- Deep Belief Networks: Understanding RBM Stacking in Deep Learning Models
- Deep Boltzmann Machines vs Deep Belief Networks: Key Differences Explained
-
Building a Boltzmann Machine
- Step 0 – Building a Movie Recommender System with RBMs: Data Preprocessing Guide
- Same Data Preprocessing in Parts 5 and 6
- Step 1 – Importing Movie Datasets for RBM-Based Recommender Systems in Python
- Step 2 – Preparing Training and Test Sets for Restricted Boltzmann Machine
- Step 3 – Preparing Data for RBM: Calculating Total Users and Movies in Python
- Step 4 – Convert Training & Test Sets to RBM-Ready Arrays in Python
- Step 5 – Converting NumPy Arrays to PyTorch Tensors for Deep Learning Models
- Step 6 – RBM Data Preprocessing: Transforming Movie Ratings for Neural Networks
- Step 7 – Implementing Restricted Boltzmann Machine Class Structure in PyTorch
- Step 8 – RBM Hidden Layer Sampling: Bernoulli Distribution in PyTorch Tutorial
- Step 9 – RBM Visible Node Sampling: Bernoulli Distribution in Deep Learning
- Step 10 – RBM Training Function: Updating Weights and Biases with Gibbs Sampling
- Step 11 – How to Set Up an RBM Model: Choosing NV, NH, and Batch Size Parameters
- Step 12 – RBM Training Loop: Epoch Setup and Loss Function Implementation
- Step 13 – RBM Training: Updating Weights and Biases with Contrastive Divergence
- Step 14 – Optimizing RBM Models: From Training to Test Set Performance Analysis
- Evaluating the Boltzmann Machine
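The hidden-layer sampling in Step 8 boils down to one formula: p(h_j = 1 | v) = sigmoid(W_j · v + b_j), followed by a Bernoulli draw. The course does this with PyTorch tensors; here is a dependency-free sketch with made-up weights and a made-up 4-movie rating vector, just to show the mechanics:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)  # reproducible Bernoulli draws

# Hypothetical tiny RBM: 4 visible units (liked/not-liked), 2 hidden units
W = [[0.5, -0.2, 0.1, 0.4],    # weights into hidden unit 0
     [-0.3, 0.8, -0.1, 0.2]]   # weights into hidden unit 1
b = [0.0, -0.1]                # hidden biases

v = [1, 0, 1, 1]               # one user's binary rating vector

def sample_hidden(v):
    probs, states = [], []
    for j in range(len(W)):
        activation = b[j] + sum(W[j][i] * v[i] for i in range(len(v)))
        p = sigmoid(activation)                          # p(h_j = 1 | v)
        probs.append(p)
        states.append(1 if random.random() < p else 0)   # Bernoulli draw
    return probs, states

probs, states = sample_hidden(v)
print([round(p, 3) for p in probs])  # activation probabilities
print(states)                        # sampled binary hidden states
```

Visible-node sampling (Step 9) is the mirror image with W transposed, and Gibbs sampling (Step 10) simply alternates the two.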
-
———-Part 6 – AutoEncoders———-
-
AutoEncoders Intuition
- Deep Learning Autoencoders: Types, Architecture, and Training Explained
- Autoencoders in Machine Learning: Applications and Architecture Overview
- Autoencoder Bias in Deep Learning: Improving Neural Network Performance
- How to Train an Autoencoder: Step-by-Step Guide for Deep Learning Beginners
- How to Use Overcomplete Hidden Layers in Autoencoders for Feature Extraction
- Sparse Autoencoders in Deep Learning: Preventing Overfitting in Neural Networks
- Denoising Autoencoders: Deep Learning Regularization Technique Explained
- What are Contractive Autoencoders? Deep Learning Regularization Techniques
- What are Stacked Autoencoders in Deep Learning? Architecture and Applications
- Deep Autoencoders vs Stacked Autoencoders: Key Differences in Neural Networks
-
Building an AutoEncoder
- Get the code and dataset ready
- Same Data Preprocessing in Parts 5 and 6
- Step 1 – Building a Movie Recommendation System with AutoEncoders: Data Import
- Step 2 – Preparing Training and Test Sets for Autoencoder Recommendation System
- Step 3 – Preparing Data for Recommendation Systems: User & Movie Count in Python
- Homework Challenge – Coding Exercise
- Step 4 – Prepare Data for Autoencoder: Creating User-Movie Rating Matrices
- Step 5 – Convert Training and Test Sets to PyTorch Tensors for Deep Learning
- Step 6 – Building Autoencoder Architecture: Class Creation for Neural Networks
- Step 7 – Python Autoencoder Tutorial: Implementing Activation Functions & Layers
- Step 8 – PyTorch Techniques for Efficient Autoencoder Training on Large Datasets
- Step 9 – Implementing Stochastic Gradient Descent in Autoencoder Architecture
- Step 10 – Machine Learning Metrics: Interpreting Loss in Autoencoder Training
- Step 11 – How to Evaluate Recommender System Performance Using Test Set Loss
- THANK YOU Video
-
———-Annex – Get the Machine Learning Basics———-
-
Regression & Classification Intuition
- What You Need for Regression & Classification
- Simple Linear Regression: Understanding Y = B0 + B1X in Machine Learning
- Linear Regression Explained: Finding the Best Fitting Line for Data Analysis
- Multiple Linear Regression – Understanding Dependent & Independent Variables
- Understanding Logistic Regression: Intuition and Probability in Classification
-
Data Preprocessing
-
Data Preprocessing in Python
- Step 1 – Data Preprocessing in Python: Essential Tools for ML Models
- Step 2 – How to Handle Missing Data in Python: Data Preprocessing Techniques
- Step 1 – Importing Essential Python Libraries for Data Preprocessing & Analysis
- Step 1 – Creating a DataFrame from CSV: Python Data Preprocessing Basics
- Step 2 – Pandas DataFrame Indexing: Building Feature Matrix X with iloc Method
- Step 3 – Preprocessing Data: Extracting Features and Target Variables in Python
- For Python learners, summary of Object-oriented programming: classes & objects
- Step 1 – Handling Missing Data in Python: SimpleImputer for Data Preprocessing
- Step 2 – Preprocessing Datasets: Fit and Transform to Handle Missing Values
- Step 1 – Preprocessing Categorical Variables: One-Hot Encoding in Python
- Step 2 – Using fit_transform Method for Efficient Data Preprocessing in Python
- Step 3 – Preprocessing Categorical Data: One-Hot and Label Encoding Techniques
- Step 1 – Machine Learning Data Prep: Splitting Dataset Before Feature Scaling
- Step 2 – Split Data into Train & Test Sets with Scikit-learn’s train_test_split
- Step 3 – Preparing Data for ML: Splitting Datasets with Python and Scikit-learn
- Step 1 – How to Apply Feature Scaling for Preprocessing Machine Learning Data
- Step 2 – Feature Scaling in Machine Learning: When to Apply StandardScaler
- Step 3 – Normalizing Data with Fit and Transform Methods in Scikit-learn
- Step 4 – How to Apply Feature Scaling to Training & Test Sets in ML
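The core rule of the feature scaling lectures (split first, then fit the scaler on the training set only) can be sketched without scikit-learn. The numbers below are made up; in practice you would use StandardScaler, which implements exactly this logic:

```python
# Standardization: fit the mean and std on the TRAINING set only,
# then apply the same transform to the test set (avoids data leakage).
train = [10.0, 20.0, 30.0, 40.0]
test = [25.0, 35.0]

mean = sum(train) / len(train)                                   # 25.0
std = (sum((x - mean) ** 2 for x in train) / len(train)) ** 0.5  # ~11.18

def scale(xs):
    return [(x - mean) / std for x in xs]

train_scaled = scale(train)
test_scaled = scale(test)

print([round(x, 3) for x in train_scaled])  # mean 0, unit variance
print([round(x, 3) for x in test_scaled])   # uses TRAINING statistics
```

Fitting the scaler on the full dataset would let test-set information leak into training, which is why the split comes first.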
-
Logistic Regression
- Understanding the Logistic Regression Equation: A Step-by-Step Guide
- How to Calculate Maximum Likelihood in Logistic Regression: Step-by-Step Guide
- Step 1a – Machine Learning Classification: Logistic Regression in Python
- Step 1b – Logistic Regression Analysis: Importing Libraries and Splitting Data
- Step 2a – Data Preprocessing for Logistic Regression: Importing and Splitting
- Step 2b – Data Preprocessing: Feature Scaling for Machine Learning in Python
- Step 3a – Implementing Logistic Regression for Classification with Scikit-Learn
- Step 3b – Predicting Purchase Decisions with Logistic Regression in Python
- Step 4a – Using Classifier Objects to Make Predictions in Machine Learning
- Step 4b – Evaluating Logistic Regression Model: Predicted vs Real Outcomes
- Step 5 – Evaluating Machine Learning Models: Confusion Matrix and Accuracy
- Step 6a – Creating a Confusion Matrix for Machine Learning Model Evaluation
- Step 6b – Visualizing Machine Learning Results: Training vs Test Set Comparison
- Step 7a – Visualizing Logistic Regression: 2D Plots for Classification Models
- Step 7b – Visualizing Logistic Regression: Interpreting Classification Results
- Step 7c – Visualizing Test Results: Assessing Machine Learning Model Accuracy
- Logistic Regression in Python – Step 7 (Colour-blind friendly image)
- Machine Learning Regression and Classification EXTRA
- EXTRA CONTENT: Logistic Regression Practical Case Study
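The confusion matrix and accuracy metric from Steps 5 and 6 reduce to simple counting. The course builds the matrix with scikit-learn's confusion_matrix; this sketch uses made-up predictions to show what those counts mean:

```python
# Hypothetical predictions from a binary classifier vs. the true labels
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0, 0, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

confusion = [[tn, fp],
             [fn, tp]]
accuracy = (tp + tn) / len(y_true)

print(confusion)  # -> [[3, 1], [1, 3]]
print(accuracy)   # -> 0.75
```

Accuracy is just the diagonal of the matrix divided by the total, which is why a confusion matrix is the more informative of the two.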
-
Congratulations!! Don’t forget your Prize :)
*** As seen on Kickstarter ***
Artificial intelligence is growing exponentially. There is no doubt about that. Self-driving cars are clocking up millions of miles, IBM Watson is diagnosing patients better than armies of doctors, and Google DeepMind’s AlphaGo beat the world champion at Go – a game where intuition plays a key role.
But the further AI advances, the more complex the problems it needs to solve become. Only Deep Learning can solve such complex problems, and that’s why it’s at the heart of Artificial intelligence.
— Why Deep Learning A-Z? —
Here are five reasons we think Deep Learning A-Z really is different, and stands out from the crowd of other training programs out there:
1. ROBUST STRUCTURE
The first and most important thing we focused on is giving the course a robust structure. Deep Learning is very broad and complex and to navigate this maze you need a clear and global vision of it.
That’s why we grouped the tutorials into two volumes, representing the two fundamental branches of Deep Learning: Supervised Deep Learning and Unsupervised Deep Learning. With each volume focusing on three distinct algorithms, we found that this is the best structure for mastering Deep Learning.
2. INTUITION TUTORIALS
So many courses and books just bombard you with theory, math and coding… But they forget to explain perhaps the most important part: why you are doing what you are doing. That’s what makes this course so different. We focus on developing an intuitive *feel* for the concepts behind Deep Learning algorithms.
With our intuition tutorials you will be confident that you understand all the techniques on an instinctive level. And once you proceed to the hands-on coding exercises you will see for yourself how much more meaningful your experience will be. This is a game-changer.
3. EXCITING PROJECTS
Are you tired of courses based on over-used, outdated data sets?
Yes? Well then you’re in for a treat.
Inside this class we will work on real-world datasets to solve real-world business problems. (Definitely not the boring iris or digit classification datasets that we see in every course.) In this course we will solve six real-world challenges:
Artificial Neural Networks to solve a Customer Churn problem
Convolutional Neural Networks for Image Recognition
Recurrent Neural Networks to predict Stock Prices
Self-Organizing Maps to investigate Fraud
Boltzmann Machines to create a Recommender System
Stacked Autoencoders* to take on the challenge of the Netflix $1 Million prize
*Stacked Autoencoders are a brand-new technique in Deep Learning which didn’t even exist a couple of years ago. We haven’t seen this method explained anywhere else in sufficient depth.
4. HANDS-ON CODING
In Deep Learning A-Z we code together with you. Every practical tutorial starts with a blank page and we write up the code from scratch. This way you can follow along and understand exactly how the code comes together and what each line means.
In addition, we will purposefully structure the code in such a way so that you can download it and apply it in your own projects. Moreover, we explain step-by-step where and how to modify the code to insert YOUR dataset, to tailor the algorithm to your needs, to get the output that you are after.
This is a course which naturally extends into your career.
5. IN-COURSE SUPPORT
Have you ever taken a course or read a book where you have questions but cannot reach the author?
Well, this course is different. We are fully committed to making this the most disruptive and powerful Deep Learning course on the planet. With that comes a responsibility to constantly be there when you need our help.
In fact, since we physically also need to eat and sleep we have put together a team of professional Data Scientists to help us out. Whenever you ask a question you will get a response from us within 48 hours maximum.
No matter how complex your query, we will be there. The bottom line is we want you to succeed.
— The Tools —
TensorFlow and PyTorch are the two most popular open-source libraries for Deep Learning. In this course you will learn both!
TensorFlow was developed by Google and is used in their speech recognition system, the Google Photos product, Gmail, Google Search and much more. Companies using TensorFlow include Airbnb, Airbus, eBay, Intel, Uber and dozens more.
PyTorch is just as powerful and is being developed by researchers at Nvidia and leading universities: Stanford, Oxford, ParisTech. Companies using PyTorch include Twitter, Salesforce and Facebook.
So which is better and for what?
Well, in this course you will have the opportunity to work with both and understand when TensorFlow is better and when PyTorch is the way to go. Throughout the tutorials we compare the two and give you tips and ideas on which could work best in certain circumstances.
The interesting thing is that both these libraries are barely over 1 year old. That’s what we mean when we say that in this course we teach you the most cutting edge Deep Learning models and techniques.
— More Tools —
Theano is another open-source Deep Learning library. It’s very similar to TensorFlow in its functionality, and we will cover it as well.
Keras is an incredible library for implementing Deep Learning models. It acts as a wrapper for Theano and TensorFlow, so thanks to Keras we can create powerful and complex Deep Learning models with only a few lines of code. This is what will allow you to keep a global vision of what you are creating. Everything you make will look clear and structured thanks to this library, so you will really build the intuition and understanding of what you are doing.
— Even More Tools —
Scikit-learn is the most practical Machine Learning library. We will mainly use it:
to evaluate the performance of our models with the most relevant technique, k-Fold Cross Validation
to improve our models with effective Parameter Tuning
to preprocess our data, so that our models can learn in the best conditions
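The index bookkeeping behind k-Fold Cross Validation (the first item above) is worth seeing once. Scikit-learn's KFold automates this; here is a dependency-free sketch of the split logic:

```python
def kfold_indices(n_samples, k):
    # Split indices into k folds: each fold serves once as the validation
    # set while the remaining k-1 folds form the training set.
    indices = list(range(n_samples))
    fold_size = n_samples // k
    folds = []
    for i in range(k):
        start = i * fold_size
        stop = n_samples if i == k - 1 else start + fold_size
        val = indices[start:stop]
        train = indices[:start] + indices[stop:]
        folds.append((train, val))
    return folds

for train_idx, val_idx in kfold_indices(10, 5):
    print(val_idx)   # each sample lands in exactly one validation fold
```

Averaging the model's score across all k validation folds gives a far more reliable performance estimate than a single train/test split.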
And of course, we have to mention the usual suspects. This whole course is based on Python and in every single section you will be getting hours and hours of invaluable hands-on practical coding experience.
Plus, throughout the course we will be using NumPy to perform fast computations and manipulate high-dimensional arrays, Matplotlib to plot insightful charts, and Pandas to import and manipulate datasets efficiently.
— Who Is This Course For? —
As you can see, there are lots of different tools in the space of Deep Learning and in this course we make sure to show you the most important and most progressive ones so that when you’re done with Deep Learning A-Z your skills are on the cutting edge of today’s technology.
If you are just starting out in Deep Learning, then you will find this course extremely useful. Deep Learning A-Z is structured around special coding blueprint approaches, meaning that you won’t get bogged down in unnecessary programming or mathematical complexities; instead you will be applying Deep Learning techniques from very early on in the course. You will build your knowledge from the ground up and you will see how, with every tutorial, you become more and more confident.
If you already have experience with Deep Learning, you will find this course refreshing, inspiring and very practical. Inside Deep Learning A-Z you will master some of the most cutting-edge Deep Learning algorithms and techniques (some of which didn’t even exist a year ago) and through this course you will gain an immense amount of valuable hands-on experience with real-world business challenges. Plus, inside you will find inspiration to explore new Deep Learning skills and applications.
— Real-World Case Studies —
Mastering Deep Learning is not just about knowing the intuition and tools, it’s also about being able to apply these models to real-world scenarios and derive actual measurable results for the business or project. That’s why in this course we are introducing six exciting challenges:
#1 Churn Modelling Problem
In this part you will be solving a data analytics challenge for a bank. You will be given a dataset with a large sample of the bank’s customers. To make this dataset, the bank gathered information such as customer id, credit score, gender, age, tenure, balance, whether the customer is active, has a credit card, etc. During a period of 6 months, the bank observed whether these customers left or stayed.
Your goal is to build an Artificial Neural Network that can predict, based on the geo-demographical and transactional information given above, whether any individual customer will leave the bank or stay (customer churn). In addition, you are asked to rank all the customers of the bank based on their probability of leaving. To do that, you will need to use the right Deep Learning model, one that is based on a probabilistic approach.
If you succeed in this project, you will create significant added value to the bank. By applying your Deep Learning model the bank may significantly reduce customer churn.
#2 Image Recognition
In this part, you will create a Convolutional Neural Network that is able to detect various objects in images. We will implement this Deep Learning model to recognize a cat or a dog in a set of pictures. However, this model can be reused to detect anything else and we will show you how to do it – by simply changing the pictures in the input folder.
For example, you will be able to train the same model on a set of brain images to detect whether they contain a tumor. And if you prefer to keep it fitted to cats and dogs, then you will literally be able to take a picture of your cat or your dog, and your model will predict which pet you have. We even tested it out on Hadelin’s dog!
#3 Stock Price Prediction
In this part, you will create one of the most powerful Deep Learning models. We will even go as far as saying that you will create the Deep Learning model closest to “Artificial Intelligence”. Why is that? Because this model will have long-term memory, just like us, humans.
The branch of Deep Learning which facilitates this is Recurrent Neural Networks. Classic RNNs have short memory and were neither popular nor powerful for this exact reason. But a recent major improvement in Recurrent Neural Networks gave rise to the popularity of LSTMs (Long Short-Term Memory RNNs), which have completely changed the playing field. We are extremely excited to include these cutting-edge Deep Learning methods in our course!
In this part you will learn how to implement this ultra-powerful model, and we will take on the challenge of using it to predict the real Google stock price. A similar challenge has already been faced by researchers at Stanford University, and we will aim to do at least as well as them.
#4 Fraud Detection
According to a recent report published by Markets & Markets the Fraud Detection and Prevention Market is going to be worth $33.19 Billion USD by 2021. This is a huge industry and the demand for advanced Deep Learning skills is only going to grow. That’s why we have included this case study in the course.
This is the first part of Volume 2 – Unsupervised Deep Learning Models. The business challenge here is about detecting fraud in credit card applications. You will be creating a Deep Learning model for a bank and you are given a dataset that contains information on customers applying for an advanced credit card.
This is the data that customers provided when filling in the application form. Your task is to detect potential fraud within these applications. That means that by the end of the challenge, you will literally come up with an explicit list of customers who potentially cheated on their applications.
#5 & 6 Recommender Systems
From Amazon product suggestions to Netflix movie recommendations – good recommender systems are very valuable in today’s world. And specialists who can create them are some of the top-paid Data Scientists on the planet.
We will work on a dataset that has exactly the same features as the Netflix dataset: plenty of movies and thousands of users who have rated the movies they watched. The ratings go from 1 to 5, exactly as in the Netflix dataset, which makes the Recommender System more complex to build than if the ratings were simply “Liked” or “Not Liked”.
Your final Recommender System will be able to predict the ratings of the movies the customers didn’t watch. Accordingly, by ranking the predictions from 5 down to 1, your Deep Learning model will be able to recommend which movies each user should watch. Creating such a powerful Recommender System is quite a challenge so we will give ourselves two shots. Meaning we will build it with two different Deep Learning models.
Our first model will be Deep Belief Networks, complex Boltzmann Machines that will be covered in Part 5. Our second model will use the powerful AutoEncoders, my personal favorites. You will appreciate the contrast between their simplicity and what they are capable of.
And you will even be able to apply it to yourself or your friends. The list of movies will be explicit, so you will simply need to rate the movies you have already watched, input your ratings into the dataset, execute your model and voilà! The Recommender System will tell you exactly which movies you would love one night if you are out of ideas of what to watch on Netflix!
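The final "rank the predictions from 5 down to 1" step described above is a simple sort once the model has produced its predicted ratings. The movie titles and scores below are made up for illustration:

```python
# Hypothetical predicted ratings (1-5) for movies a user hasn't watched yet,
# as they might come out of the trained RBM or AutoEncoder
predicted = {
    "The Matrix": 4.6,
    "Titanic": 3.1,
    "Alien": 4.2,
    "Mamma Mia": 2.4,
}

# Rank from the highest predicted rating down and recommend the top picks
ranked = sorted(predicted, key=predicted.get, reverse=True)
top2 = ranked[:2]
print(top2)  # -> ['The Matrix', 'Alien']
```

The hard part, of course, is producing good predicted ratings in the first place; that is what Parts 5 and 6 are about.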
— Summary —
In conclusion, this is an exciting training program filled with intuition tutorials, practical exercises and real-world case studies.
We are super enthusiastic about Deep Learning and hope to see you inside the class!
Kirill & Hadelin
What's included
- 22 hours on-demand video
- 35 articles
- Access on mobile and TV
- Certificate of completion