Senior Professor
Dept. of Computer Science and Information Systems
Scope and Objectives of the course: This course is an undergraduate course on Machine Learning (ML). ML is a sub-field of Artificial Intelligence. It helps engineers build automated systems that learn from experience or examples, and it helps machines make data-driven decisions. For example, Google Maps uses the road network, real-time traffic conditions, time of travel, etc. to predict an appropriate route for you using ML algorithms.

ML is a multi-disciplinary field, with roots in computer science and mathematics. ML methods are best described using linear algebra, and their behaviour is best understood using the tools of probability and statistics. By some recent estimates, roughly 328 million terabytes of data are created every day. With this growing volume of data, the need for automated methods of data analysis continues to grow. The goal of ML is to develop methods that can automatically detect patterns in data, and then use the uncovered patterns to predict future outcomes of interest.

This course will cover many ML models and algorithms, including linear models, multi-layer neural networks, support vector machines, density estimation methods, Bayesian belief networks, mixture models, clustering, ensemble methods, and reinforcement learning. The course objectives are listed in the course handout below.
Course Handout: Click here.
Class Presentations:
Sl. No. | Topic | Class Presentations |
1. | Course Administration and Motivation | |
2. | Machine Learning Overview | |
3. | Machine Learning Frameworks | |
4. | Symbolic Learning - I (Version Space) | Click here |
5. | Symbolic Learning - II (Decision Trees/ Random Forests) | |
6. | Model Evaluation (Bias, Variance, Cross-validation, Confusion Matrix, Out-of-Bag metric etc.) | |
7. | Regression Models (Linear regression, Logistic Regression, Gradient Descent, Stochastic GD) | |
8. | Linear Discriminant Functions for Classification, Least Squares for Classification, Fisher's Discriminant Function | |
9. | Probabilistic approach to Machine Learning (Bayesian Networks, Naïve Bayes Algorithm) | |
10. | Neural Networks - I (Connectionist Models: Perceptron, Multi-Layer Perceptron (MLP), Back Propagation Algorithm) | |
11. | Neural Networks - II (Regularization, Data Augmentation, Convolutional Neural Networks, Recurrent Neural Networks, Autoregressive Models and Generative Adversarial Networks (GANs)) | |
12. | Instance-based and Kernel-based Learning (k-Nearest Neighbor (k-NN), and Support Vector Machines (SVMs)) | |
13. | Unsupervised Learning (K-Means Clustering, Gaussian Mixture Models, Principal Component Analysis (PCA) for feature reduction) | |
14. | Reinforcement Learning (Markov Decision Process and Q-Learning by Prof. Manoj Kr. Jha) | |
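As a small taste of the gradient-descent material in topic 7, the following is a minimal, library-free sketch (illustrative only, not course code): it fits a one-feature linear regression by batch gradient descent on the mean squared error. The data, learning rate, and epoch count are arbitrary choices for the example.

```python
# Batch gradient descent for one-feature linear regression,
# minimizing MSE: J(w, b) = (1/n) * sum((w*x + b - y)^2).
def fit_linear(xs, ys, lr=0.05, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Partial derivatives of the MSE with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Step against the gradient
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Noise-free data generated from y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
w, b = fit_linear(xs, ys)
print(w, b)  # converges towards w = 2, b = 1
```

Stochastic gradient descent (also in topic 7) differs only in that each update uses one example (or a small batch) rather than the full training set.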
Programming Assignments:
Sl. No. | Title | Date of Submission | Problem statement |
1. | Data Exploration and Pre-Processing | 25.01.2024 | |
2. | TensorFlow's Decision Forests: Random Forests and Gradient Boosted Trees | 12.02.2024 | |
3. | Linear and Logistic Regression using TensorFlow | 01.03.2024 | |
4. | Gaussian Naïve Bayes Classifier using Scikit Learn | 05.04.2024 | |
5. | Back Propagation Neural Network (BPN) for Regression Task using PyTorch | 12.04.2024 | |
6. | Mini-Project on Convolutional Neural Networks (CNNs): Deep Learning | 30.04.2024 | |
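The assignments above lean on TensorFlow, PyTorch, and Scikit-learn. As a library-free warm-up for the instance-based methods of topic 12, here is a minimal k-nearest-neighbour sketch (illustrative only; the toy data points are made up for the example):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of ((features...), label) pairs; squared Euclidean
    distance is used, which gives the same ranking as Euclidean distance.
    """
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [
    ((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
    ((5.0, 5.0), "B"), ((4.8, 5.2), "B"), ((5.1, 4.9), "B"),
]
print(knn_predict(train, (1.1, 1.0)))  # prints "A"
```

Unlike the parametric models earlier in the course, k-NN has no training phase: all computation is deferred to prediction time, which is why it is called instance-based (or lazy) learning.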
------- ~ --------