Machine Learning (CS 567)
Fall 2015

Dr. Nazar Khan

The ability of biological brains to sense, perceive, analyse and recognise patterns can only be described as stunning. Furthermore, they have the ability to learn from new examples. Mankind's understanding of exactly how biological brains operate is embarrassingly limited.

However, numerous 'practical' techniques do exist that give machines the 'appearance' of being intelligent. This is the domain of Statistical Pattern Recognition and Machine Learning. Instead of attempting to mimic the complex workings of a biological brain, this course explains mathematically well-founded and empirically successful techniques for analysing patterns and learning from them.

Accordingly, this course is a mathematically involved introduction to the field of pattern recognition and machine learning. It will prepare students for further study and research in Pattern Recognition, Machine Learning, Computer Vision, Data Analysis and other areas that tackle Artificial Intelligence (AI) type problems.

Passing this course is necessary for students planning to undertake research with Dr. Nazar Khan.

Course Outline

Prerequisites:
The course is designed to be self-contained, so the required mathematical details will be covered in the lectures. However, this is a math-heavy course, and students are encouraged to brush up on their knowledge of

  1. calculus (differentiation, partial derivatives)
  2. linear algebra (vectors, matrices, dot-product, orthogonality, eigenvectors, SVD)
  3. probability and statistics

Students should know that the only way to benefit from this course is to be prepared to spend many hours reading the textbook and attempting its exercises, preferably alone or with a class-fellow.

Text:

  1. (Required) Pattern Recognition and Machine Learning by Christopher M. Bishop (2006)
  2. (Recommended) Pattern Classification by Duda, Hart and Stork (2001)

Lectures:
Tuesday     2:30 pm - 4:00 pm    Al Khwarizmi Lecture Theater
Thursday    2:30 pm - 4:00 pm    Al Khwarizmi Lecture Theater

Office Hours:
Thursday    5:00 pm - 7:00 pm

Teaching Assistant:
Umar Farooq (mscsf14m038@pucit.edu.pk)

Programming Environment: MATLAB

Grading:
Assignments 20%
Quizzes 5%
Mid-Term 35%
Final 40%

  1. Graduate students will be evaluated more rigorously when course grades are determined.
  2. Theoretical assignments must be submitted before the lecture on the due date.
  3. There will be no make-up for any missed quiz.
  4. Make-up for a mid-term or final exam will be allowed only under exceptional circumstances, provided that the instructor has been notified beforehand.
  5. The instructor reserves the right to deny requests for any make-up quiz or exam.
  6. The worst quiz score will be dropped.
  7. The worst assignment score will be dropped.

Assignments:
Assignment     Assigned                       Due
Assignment 1   Tuesday, November 10, 2015     Thursday, November 19, 2015
Assignment 2   Friday, November 27, 2015      Thursday, December 10, 2015
Assignment 3   Thursday, December 10, 2015    Thursday, December 17, 2015
Assignment 4   Monday, December 14, 2015      Monday, December 21, 2015
Assignment 5   Monday, January 11, 2016       Friday, January 15, 2016

Content:

  1. Lectures 1 to 4: Introduction [Handouts] (sketch below)
    • Introduction
    • Curve Fitting (Over-fitting vs. Generalization)
    • Regularized Curve Fitting
    • Probability
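
To make the curve-fitting topics above concrete, here is a minimal MATLAB sketch of regularized polynomial curve fitting. The data, the degree-9 polynomial and the regularization strength lambda are made-up illustrations, not settings mandated by the course:

    rng(0);                                 % fix the random seed for repeatability
    N = 10;                                 % number of (hypothetical) training points
    x = linspace(0, 1, N)';                 % inputs in [0, 1]
    t = sin(2*pi*x) + 0.2*randn(N, 1);      % noisy targets around sin(2*pi*x)

    M = 9;                                  % polynomial degree (prone to over-fitting)
    Phi = bsxfun(@power, x, 0:M);           % design matrix: Phi(i,j) = x(i)^(j-1)
    lambda = 1e-3;                          % regularization strength (assumed value)

    % Minimise ||Phi*w - t||^2 + lambda*||w||^2 via the regularized normal equations
    w = (lambda*eye(M+1) + Phi'*Phi) \ (Phi'*t);

    xs = linspace(0, 1, 200)';              % dense grid for plotting the fitted curve
    ys = bsxfun(@power, xs, 0:M) * w;
    plot(x, t, 'o', xs, ys, '-');

Setting lambda = 0 recovers ordinary least squares, which over-fits badly for a degree-9 polynomial on 10 points; increasing lambda smooths the fit toward better generalization.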
  2. Lectures 5 to 8: Background Mathematics [Handouts] (sketch below)
    • Gaussian Distribution
    • Fitting a Gaussian Distribution to Data
    • Probabilistic Curve Fitting (Maximum Likelihood (ML) Estimation)
    • Bayesian Curve Fitting (Maximum A Posteriori (MAP) Estimation)
    • Model Selection (Cross Validation)
    • Calculus of variations
    • Lagrange Multipliers
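
As a small illustration of the background material above, the following MATLAB sketch fits a Gaussian to data by maximum likelihood; the sample drawn from N(5, 2^2) is made-up:

    rng(0);
    x = 5 + 2*randn(1000, 1);               % hypothetical data from N(5, 2^2)

    mu_ml  = mean(x);                       % ML estimate of the mean
    var_ml = mean((x - mu_ml).^2);          % ML variance: divides by N, hence biased

    xs = linspace(min(x), max(x), 200);     % grid for evaluating the fitted density
    p  = exp(-(xs - mu_ml).^2 / (2*var_ml)) / sqrt(2*pi*var_ml);
    plot(xs, p);

Note that the ML variance normalises by N rather than N-1 (MATLAB's var(x, 1) computes the same N-normalised value), the standard example of maximum likelihood bias.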
  3. Lectures 9 to 13: Decision Theory and Information Theory [Handouts] (sketch below)
    • Decision Theory
      • Minimising the number of misclassifications
      • Minimising expected loss
      • Benefits of knowing posterior distributions
      • Generative models vs. discriminative models vs. discriminant functions
      • Loss functions for regression problems
    • Information Theory
      • Information h(x) = log(1/p(x)): less probable events carry more information
      • Entropy = expected information (measure of uncertainty)
        • Maximum Entropy Discrete Distribution (Uniform)
        • Maximum Entropy Continuous Distribution (Gaussian)
      • Jensen's Inequality
      • Relative Entropy (KL divergence)
      • Mutual Information
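
The information-theoretic quantities above reduce to one-liners for discrete distributions. In this MATLAB sketch the two distributions over four outcomes are made-up:

    p = [0.25 0.25 0.25 0.25];              % uniform: the maximum-entropy discrete case
    q = [0.70 0.10 0.10 0.10];              % a more peaked, less uncertain distribution

    H  = @(d) -sum(d .* log2(d));           % entropy in bits (assumes all entries > 0)
    KL = @(a, b) sum(a .* log2(a ./ b));    % relative entropy KL(a || b)

    fprintf('H(p)       = %.3f bits\n', H(p));      % log2(4) = 2 bits
    fprintf('H(q)       = %.3f bits\n', H(q));      % less than 2 bits
    fprintf('KL(q || p) = %.3f bits\n', KL(q, p));  % >= 0, zero iff q equals p

As the outline notes, no distribution over four outcomes has higher entropy than the uniform p, and the KL divergence is non-negative by Jensen's inequality.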
  4. Lectures 14 to 17: Probability Distributions and Parametric Density Estimation [Handouts] (sketch below)
    • Density Estimation is fundamentally ill-posed
    • Parametric Density Estimation
    • Probability Distributions
      • Bernoulli
      • Binomial
      • Beta
      • Multinomial
      • Dirichlet
      • Gaussian
    • Completing-the-square
    • Sequential Learning via Conjugate Priors
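
Sequential learning via conjugate priors can be previewed with the Beta-Bernoulli pair from this block. In the MATLAB sketch below, the true parameter, the uniform Beta(1,1) prior and the number of coin flips are all made-up choices:

    rng(0);
    mu_true = 0.7;                          % hypothetical probability of heads
    flips = rand(100, 1) < mu_true;         % simulated coin flips (1 = heads)

    a = 1; b = 1;                           % Beta(1,1) prior, i.e. uniform on [0,1]
    for n = 1:numel(flips)
        % Conjugacy: the posterior after each flip is again a Beta distribution,
        % so updating reduces to incrementing two counts.
        a = a + flips(n);                   % heads count (plus prior pseudo-count)
        b = b + (1 - flips(n));             % tails count (plus prior pseudo-count)
    end
    fprintf('Posterior mean of mu: %.3f\n', a / (a + b));

Because the Beta prior is conjugate to the Bernoulli likelihood, the data can be absorbed one observation at a time without ever revisiting earlier flips.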
  5. Lectures 18 to 19: Non-Parametric Density Estimation [Handouts] (sketch below)
    • Non-Parametric Density Estimation
      • Histogram based
      • Kernel estimators
      • Nearest neighbours
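
A minimal MATLAB sketch of one of the estimators above, a Gaussian kernel density estimator; the two-component mixture data and the bandwidth h = 0.3 are illustrative assumptions:

    rng(0);
    x = [randn(200, 1) - 2; 0.5*randn(200, 1) + 2];  % 400 made-up samples

    h  = 0.3;                               % bandwidth: controls the smoothing
    xs = linspace(-6, 6, 300)';             % evaluation grid
    p  = zeros(size(xs));
    for i = 1:numel(x)
        % Each data point contributes one Gaussian bump of width h
        p = p + exp(-(xs - x(i)).^2 / (2*h^2)) / sqrt(2*pi*h^2);
    end
    p = p / numel(x);                       % average of the bumps: the KDE
    plot(xs, p);

The bandwidth h plays the same role as the histogram bin width: too small yields a spiky estimate, too large over-smooths the two modes.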
  6. Lectures 20 to 21: Linear Models for Regression [Handouts] (sketch below)
    • Equivalence of likelihood maximisation (ML) and sum-of-squared-error (SSE) minimisation (Least Squares)
    • Design matrix
    • Pseudoinverse
    • Regularized least-squares estimation
    • Linear regression for multivariate targets
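
The design-matrix view of linear regression from this block, sketched in MATLAB on made-up data generated as t = 2 + 3x + noise:

    rng(0);
    N = 50;
    x = rand(N, 1);                         % hypothetical inputs
    t = 2 + 3*x + 0.1*randn(N, 1);          % targets from a known linear model

    Phi = [ones(N, 1) x];                   % design matrix with a bias column of ones
    w   = pinv(Phi) * t;                    % Moore-Penrose pseudoinverse solution
    fprintf('w0 = %.3f, w1 = %.3f\n', w(1), w(2));

In practice Phi \ t computes the same least-squares solution more stably, and solving (Phi'*Phi + lambda*eye(2)) \ (Phi'*t) gives the regularized variant.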
  7. Lectures 22 to 25: Linear Models for Classification [Handouts] (sketch below)
    • Least-squares
    • Fisher's Linear Discriminant (FLD)
    • Perceptron
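
Finally, a minimal MATLAB sketch of the perceptron from this block, run on made-up, linearly separable 2-D data:

    rng(0);
    X = [randn(50, 2) + 2; randn(50, 2) - 2];   % two well-separated Gaussian blobs
    t = [ones(50, 1); -ones(50, 1)];            % class labels in {-1, +1}

    Xa = [ones(100, 1) X];                      % augment inputs with a bias feature
    w  = zeros(3, 1);                           % initial weight vector
    for epoch = 1:100
        mistakes = 0;
        for n = 1:100
            if sign(Xa(n, :) * w) ~= t(n)       % misclassified point?
                w = w + t(n) * Xa(n, :)';       % perceptron update rule
                mistakes = mistakes + 1;
            end
        end
        if mistakes == 0, break; end            % converged: every point correct
    end

For linearly separable data this loop is guaranteed to terminate (the perceptron convergence theorem); for non-separable data it would cycle, which motivates the other linear classifiers in this block.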