Machine Learning (CS 567)
Fall 2015
Dr. Nazar Khan
The ability of biological brains to sense, perceive, analyse and recognise patterns can only be described as stunning. Furthermore, they have the ability to learn from new examples. Mankind's understanding of exactly how biological brains operate is embarrassingly limited.
However, there do exist numerous 'practical' techniques that give machines the 'appearance' of being intelligent. This is the domain of Statistical Pattern Recognition and Machine Learning. Instead of attempting to mimic the complex workings of a biological brain, this course aims at explaining mathematically well-founded and empirically successful techniques for analysing patterns and learning from them.
Accordingly, this course is a mathematically involved introduction to the field of pattern recognition and machine learning. It will prepare students for further study/research in the areas of Pattern Recognition, Machine Learning, Computer Vision, Data Analysis and other areas attempting to solve Artificial Intelligence (AI) type problems.
Passing this course is necessary for students planning to undertake research with Dr. Nazar Khan.
Course Outline
Prerequisites:
The course is designed to be self-contained, so the required mathematical details will be covered in the lectures. However, this is a math-heavy course. Students are encouraged to brush up on their knowledge of
 calculus (differentiation, partial derivatives)
 linear algebra (vectors, matrices, dot product, orthogonality, eigenvectors, SVD)
 probability and statistics
Students should know that the only way to benefit from this course is to be prepared to spend many hours reading the textbook and attempting its exercises, preferably alone or with a class fellow.
Text:
 (Required) Pattern Recognition and Machine Learning by Christopher M. Bishop (2006)
 (Recommended) Pattern Classification by Duda, Hart and Stork (2001)
Lectures:
Tuesday | 2:30 pm – 4:00 pm | Al Khwarizmi Lecture Theater
Thursday | 2:30 pm – 4:00 pm | Al Khwarizmi Lecture Theater
Office Hours:
Thursday | 5:00 pm – 7:00 pm
Teaching Assistant:
Umar Farooq mscsf14m038@pucit.edu.pk
Programming Environment: MATLAB
 MATLAB Resources (by Aykut Erdem)
Grading:
Assignments | 20%
Quizzes | 5%
Mid-Term | 35%
Final | 40%
 Graduate students will be evaluated in a more rigorous manner when determining course grades.
 Theoretical assignments have to be submitted before the lecture on the due date.
 There will be no makeup for any missed quiz.
 Makeup for a midterm or final exam will be allowed only under exceptional circumstances, provided that the instructor has been notified beforehand.
 The instructor reserves the right to deny requests for any makeup quiz or exam.
 The worst quiz score will be dropped.
 The worst assignment score will be dropped.
Assignments:
# | Assigned | Due
Assignment 1 | Tuesday, November 10, 2015 | Thursday, November 19, 2015
Assignment 2 | Friday, November 27, 2015 | Thursday, December 10, 2015
Assignment 3 | Thursday, December 10, 2015 | Thursday, December 17, 2015
Assignment 4 | Monday, December 14, 2015 | Monday, December 21, 2015
Assignment 5 | Monday, January 11, 2016 | Friday, January 15, 2016
Content:
 Lectures 1 to 4: Introduction [Handouts]
 Introduction
 Curve Fitting (Overfitting vs. Generalization)
 Regularized Curve Fitting
 Probability
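The regularized curve-fitting example that runs through these opening lectures (Bishop, Ch. 1) can be sketched as follows. The course environment is MATLAB; this is an illustrative NumPy sketch, not course code, and the data (noisy samples of sin(2πx)), the degree, and the λ value are made-up choices:

```python
import numpy as np

# Regularized polynomial curve fitting (Bishop Ch. 1 style): minimize
# ||Phi w - t||^2 + lam * ||w||^2, giving w = (Phi^T Phi + lam I)^{-1} Phi^T t.

def fit_polynomial(x, t, degree, lam=0.0):
    Phi = np.vander(x, degree + 1, increasing=True)  # design matrix [1, x, x^2, ...]
    A = Phi.T @ Phi + lam * np.eye(degree + 1)
    return np.linalg.solve(A, Phi.T @ t)

def predict(w, x):
    Phi = np.vander(x, len(w), increasing=True)
    return Phi @ w

# Noisy samples of sin(2*pi*x), as in Bishop's running example.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(10)

w_unreg = fit_polynomial(x, t, degree=9)          # overfits: interpolates the noise
w_reg = fit_polynomial(x, t, degree=9, lam=1e-3)  # regularization shrinks the weights
```

The degree-9 unregularized fit passes through all ten noisy points (overfitting), while even a small λ dramatically shrinks the weight magnitudes and improves generalization.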
 Lectures 5 to 8: Background Mathematics [Handouts]
 Gaussian Distribution
 Fitting a Gaussian Distribution to Data
 Probabilistic Curve Fitting (Maximum Likelihood (ML) Estimation)
 Bayesian Curve Fitting (Maximum A Posteriori (MAP) Estimation)
 Model Selection (Cross Validation)
 Calculus of variations
 Lagrange Multipliers
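Fitting a Gaussian to data by maximum likelihood, covered in this block, reduces to two closed-form formulas: setting the log-likelihood derivatives to zero yields the sample mean and the biased sample variance. A NumPy sketch (illustrative only; the synthetic data and seed are made up):

```python
import numpy as np

# Maximum-likelihood fit of a univariate Gaussian: the ML estimates are the
# sample mean and the (biased, divide-by-N) sample variance.

def fit_gaussian_ml(x):
    mu = np.mean(x)
    sigma2 = np.mean((x - mu) ** 2)   # ML estimate divides by N, not N-1
    return mu, sigma2

def log_likelihood(x, mu, sigma2):
    n = len(x)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=10_000)
mu, sigma2 = fit_gaussian_ml(x)       # should recover roughly (3.0, 4.0)
```

By construction, no other (μ, σ²) pair attains a higher log-likelihood on this sample than the ML estimates.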
 Lectures 9 to 13: Decision Theory and Information Theory [Handouts]
 Decision Theory
 Minimising number of misclassifications
 Minimising expected loss
 Benefits of knowing posterior distributions
 Generative models vs. Discriminative models vs. Discriminant functions
 Loss functions for regression problems
 Information Theory
 Information ∝ 1/Probability
 Entropy = expected information (measure of uncertainty)
 Maximum Entropy Discrete Distribution (Uniform)
 Maximum Entropy Continuous Distribution (Gaussian)
 Jensen's Inequality
 Relative Entropy (KL divergence)
 Mutual Information
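The entropy and relative-entropy definitions above are easy to make concrete: the uniform distribution over four outcomes attains the maximum entropy of exactly 2 bits, and the KL divergence from any other distribution to it is strictly positive. A small NumPy sketch (illustrative, not course code; the `skewed` distribution is a made-up example):

```python
import numpy as np

# Entropy H(p) = -sum p log p (expected information, a measure of uncertainty)
# and relative entropy KL(p || q) = sum p log(p/q), which is >= 0 with
# equality iff p == q.

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 log 0 is taken as 0
    return -np.sum(p * np.log2(p))   # in bits

def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

uniform = np.full(4, 0.25)                  # maximum-entropy discrete distribution
skewed = np.array([0.7, 0.1, 0.1, 0.1])     # lower entropy: less uncertainty
```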
 Lectures 14 to 17: Probability Distributions and Parametric Density Estimation [Handouts]
 Density Estimation is fundamentally ill-posed
 Parametric Density Estimation
 Probability Distributions
 Bernoulli
 Binomial
 Beta
 Multinomial
 Dirichlet
 Gaussian
 Completing the square
 Sequential Learning via Conjugate Priors
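Sequential learning via conjugate priors is simplest in the Beta-Bernoulli case: a Beta(a, b) prior on a coin's bias stays Beta after every flip, so Bayesian updating is just counting. A sketch (the prior pseudo-counts and the coin-flip data are made up for illustration):

```python
# Sequential Bayesian learning with a conjugate prior: a Beta(a, b) prior on a
# Bernoulli parameter stays Beta after each observation, so updating reduces
# to counting -- each 1 increments a, each 0 increments b.

def beta_bernoulli_update(a, b, observations):
    for x in observations:
        if x == 1:
            a += 1
        else:
            b += 1
    return a, b

a, b = 2, 2                      # hypothetical prior pseudo-counts
data = [1, 1, 0, 1, 1, 0, 1, 1]  # made-up coin flips: 6 heads, 2 tails
a, b = beta_bernoulli_update(a, b, data)
posterior_mean = a / (a + b)     # (2 + 6) / (2 + 6 + 2 + 2) = 8/12
```

Because the posterior after each flip has the same functional form as the prior, the data can arrive one point at a time with no need to store past observations.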
 Lectures 18 to 19: Non-Parametric Density Estimation [Handouts]
 Non-Parametric Density Estimation
 Histogram based
 Kernel estimators
 Nearest neighbours
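Of the three non-parametric estimators above, the kernel estimator is the easiest to sketch: place a Gaussian bump of width h on every sample and average. An illustrative NumPy version (the sample distribution, grid, and bandwidth are made-up choices, not course code):

```python
import numpy as np

# Kernel density estimation with a Gaussian kernel:
#   p(x) = (1/N) * sum_n  N(x | x_n, h^2)
# i.e. a small Gaussian bump of bandwidth h centred on every sample, averaged.

def kde_gaussian(x_query, samples, h):
    diffs = (x_query[:, None] - samples[None, :]) / h
    kernels = np.exp(-0.5 * diffs ** 2) / (h * np.sqrt(2 * np.pi))
    return kernels.mean(axis=1)

rng = np.random.default_rng(2)
samples = rng.normal(0.0, 1.0, size=5_000)   # made-up data: standard normal
grid = np.linspace(-4, 4, 81)
density = kde_gaussian(grid, samples, h=0.3)
```

The bandwidth h plays the same role as the regularization parameter in curve fitting: too small and the estimate is spiky (overfitting), too large and it oversmooths.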
 Lectures 20 to 21: Linear Models for Regression [Handouts]
 Equivalence of likelihood maximisation (ML) and SSE minimisation (Least Squares)
 Design matrix
 Pseudoinverse
 Regularized leastsquares estimation
 Linear regression for multivariate targets
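The design matrix and pseudoinverse from this block fit in a few lines, and the same code handles multivariate targets because each target column is solved independently. An illustrative NumPy sketch (the features, true weights, and noise level are made up):

```python
import numpy as np

# Least-squares regression via the design matrix and the Moore-Penrose
# pseudoinverse: W = pinv(Phi) @ T. With a matrix of targets T, each column
# is solved independently, which is how linear regression extends to
# multivariate targets.

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))                      # 200 inputs, 2 features
Phi = np.hstack([np.ones((200, 1)), X])                    # design matrix with bias column
W_true = np.array([[1.0, -2.0], [3.0, 0.5], [-1.0, 2.0]])  # made-up true weights (3 x 2)
T = Phi @ W_true + 0.01 * rng.standard_normal((200, 2))    # two noisy target columns

W_hat = np.linalg.pinv(Phi) @ T                            # least-squares solution
```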
 Lectures 22 to 25: Linear Models for Classification [Handouts]
 Leastsquares
 Fisher's Linear Discriminant (FLD)
 Perceptron
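The perceptron learning rule closing this block can be sketched in a few lines: for each misclassified point (x, t) with t ∈ {−1, +1}, update w ← w + η·t·x; on linearly separable data this converges in finitely many passes. An illustrative NumPy sketch (the toy data set is made up, not course code):

```python
import numpy as np

# The perceptron learning rule: for each misclassified point (x, t) with
# t in {-1, +1}, update w <- w + eta * t * x. On linearly separable data
# the algorithm converges in a finite number of passes.

def train_perceptron(X, t, eta=1.0, max_epochs=100):
    Phi = np.hstack([np.ones((len(X), 1)), X])   # absorb the bias into the weights
    w = np.zeros(Phi.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for phi_n, t_n in zip(Phi, t):
            if t_n * (w @ phi_n) <= 0:           # misclassified (or on the boundary)
                w += eta * t_n * phi_n
                errors += 1
        if errors == 0:                          # a full clean pass: converged
            break
    return w

# A tiny linearly separable toy set: class +1 lies above the line x1 + x2 = 1.
X = np.array([[2.0, 2.0], [1.5, 1.0], [0.0, 0.0], [-1.0, 0.5]])
t = np.array([1, 1, -1, -1])
w = train_perceptron(X, t)
preds = np.sign(np.hstack([np.ones((4, 1)), X]) @ w)
```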
