ML Made Simple


Deep Learning

  • Streamlining Neural Network Training: The Power of Back-Propagation
    December 15, 2023 · Puru Dewan
  • Mastering Optimizers in CNNs: SGD to Adam
    December 14, 2023 · Puru Dewan
  • Optimizing CNNs with Gradient Calculation: A Deep Dive
    December 14, 2023 · Puru Dewan
  • Navigating Loss Functions in Deep Learning: From MSE to Triplet Loss
    December 13, 2023 · Puru Dewan
  • Enhancing Machine Learning Models with Data Augmentation in PyTorch
    December 12, 2023 · Puru Dewan
  • Demystifying Pooling Layers in CNNs: MaxPool, AvgPool, and More
    December 11, 2023 · Puru Dewan
  • Maximizing Efficiency with Batch Normalization in CNNs
    December 11, 2023 · Puru Dewan
  • Exploring Activation Functions in Neural Networks: ReLU, Sigmoid, and TanH
    December 11, 2023 · Puru Dewan
  • Understanding the Role of Bias in Convolutional Neural Networks (CNN)
    December 9, 2023 · Puru Dewan
  • Understanding Convolution Neural Networks (CNN)
    December 9, 2023 · Puru Dewan

