Advanced Machine Learning (CS 667)
Spring 2016

Dr. Nazar Khan

The ability of biological brains to sense, perceive, analyse and recognise patterns can only be described as stunning. Furthermore, they have the ability to learn from new examples. Mankind's understanding of exactly how biological brains operate is embarrassingly limited.

However, there do exist numerous 'practical' techniques that give machines the 'appearance' of being intelligent. This is the domain of statistical pattern recognition and machine learning. Instead of attempting to mimic the complex workings of a biological brain, this course aims at explaining mathematically well-founded techniques for analysing patterns and learning from them.

This course is an extension of CS 567 -- Machine Learning and is therefore a mathematically involved introduction to the field of pattern recognition and machine learning. It will prepare students for further study/research in the areas of Pattern Recognition, Machine Learning, Computer Vision, Data Analysis and other areas that attempt to solve Artificial Intelligence (AI) type problems.

Pre-requisite(s): CS 567 -- Machine Learning

Text:

  1. (Required) Pattern Recognition and Machine Learning by Christopher M. Bishop (2006)
  2. (Recommended) Pattern Classification by Duda, Hart and Stork (2001)

Lectures:
Monday       8:15 am - 9:45 am    Al Khwarizmi Lecture Theater
Wednesday    8:15 am - 9:45 am    Al Khwarizmi Lecture Theater

Office Hours:
Wednesday    10:00 am - 1:00 pm

Programming Environment: MATLAB

Grading Scheme/Criteria:
Category                   Weight    Effective* Weight
Assignments and Quizzes    10%       10%
Projects                   15%       40%
Mid-Term                   35%       20%
Final                      40%       30%
*The current grading scheme is a PU requirement that I do not agree with. Projects will actually constitute 40% of the course. This will be achieved by basing 15 of the mid-term's 35 percentage points and 10 of the final's 40 percentage points on performance in the projects. So the mid-term is effectively 20% of the grade and the final is effectively 30% of the grade.

Projects

  1. Logistic Regression
    1. Implement a binary Logistic Regression classifier and train it using the IRLS algorithm to recognise hand-written digits from 2 classes of the MNIST dataset (a minimal IRLS sketch is given after this list). (Due: Monday, March 7th, 2016)
    2. Implement a multiclass Logistic Regression classifier and train it using SGD to recognise hand-written digits from the MNIST dataset. (Due: Monday, March 14th, 2016)
  2. Neural Networks
    1. Implement the backpropagation algorithm for MLP training and regenerate Figure 5.3 from Bishop's book. (Due: Monday, March 21st, 2016)
  3. Convolutional Neural Networks
    1. Implement a Convolutional Neural Network for classification and train it to recognise hand-written digits from the MNIST dataset. (Due: Monday, May 16th, 2016)
  4. PCA
    1. Implement Principal Component Analysis and regenerate Figures 12.3, 12.4 and 12.5 from Bishop's book. (Due: Monday, April 4th, 2016)
    2. Implement Principal Component Analysis for classification and use it to recognise hand-written digits from the MNIST dataset. (Due: Monday, April 11th, 2016)
  5. Density estimation via Gaussian Mixture Model (GMM)
    1. Write a generic implementation of learning a GMM via the EM algorithm and regenerate Figure 9.8 from Bishop's book. (Due: Monday, May 30th, 2016)
  6. Multimodal conditional density estimation via Mixture Density Network (MDN)
    1. Write a generic implementation of learning an MDN and regenerate Figures 5.19 and 5.21 from Bishop's book. (Due: Monday, June 6th, 2016)
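
The following is a minimal MATLAB sketch of the IRLS update used in project 1.1, assuming a design matrix Phi (N x M, one feature vector per row) and a column vector t of 0/1 labels are already loaded; the function name irls_logreg and the fixed iteration count are illustrative only, not part of the project specification.

    function w = irls_logreg(Phi, t, maxIter)
    % Binary logistic regression trained by IRLS (Newton-Raphson), Bishop Sec. 4.3.3.
    % Phi : N x M design matrix, t : N x 1 vector of 0/1 labels (assumed given).
    [~, M] = size(Phi);
    w = zeros(M, 1);                       % initial weight vector
    for iter = 1:maxIter
        y = 1 ./ (1 + exp(-Phi * w));      % logistic sigmoid of the activations
        R = diag(y .* (1 - y));            % N x N weighting matrix R
        % Newton-Raphson step: w <- w - (Phi' R Phi)^(-1) Phi' (y - t)
        w = w - (Phi' * R * Phi) \ (Phi' * (y - t));
    end
    end

For MNIST-sized data, R should be applied implicitly (e.g. bsxfun(@times, y .* (1 - y), Phi)) rather than formed as an explicit N x N matrix.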

Content

  1. Linear Models for Classification
    • Discriminant Functions
      • Least Squares Classification -- y(x)=f(w'x)
      • Fisher's Linear Discriminant -- J(w) = (w'*S_b*w) / (w'*S_w*w)
      • Perceptron -- y(x)=step(w'φ(x))
    • Probabilistic Generative Models -- model posterior p(C_k|x) via class-conditional p(x|C_k) and prior p(C_k)
    • Probabilistic Discriminative Models -- model posterior p(C_k|x) directly
      • Logistic Sigmoid function and its derivative
      • Softmax function and its derivative
      • Positive Definite matrix
      • Logistic Regression
      • A positive-definite Hessian implies convexity, which implies a unique global minimum
      • Newton-Raphson updates constitute the IRLS algorithm
      • Multiclass Logistic Regression
  2. Neural Networks
    • Mathematical model of a single neuron
    • Learn optimal features φ* as well as weights w* for those features
    • Multilayer Perceptrons
    • Back-propagation
    • Regularization Techniques
      • Weight decay
      • Per-layer weight decay
      • Early stopping
      • Training with transformed data
      • Tangent propagation
    • Convolutional Neural Networks
      • Neurons as detectors
      • Invariance
      • Local correlation property of images
      • Receptive field
      • Feature maps
      • Weight sharing
  3. Principal Component Analysis
    • Dimensionality Reduction, Data Compression, Feature Extraction
    • Maximum Variance Formulation of PCA (see the sketch at the end of this outline)
    • PCA for high-dimensional data
    • Whitening
    • Classification via PCA
  4. Support Vector Machines and Kernel Methods
    • Maximising the margin -- hard constraints
    • Lagrange Multipliers Method for Inequality Constraints
    • Dual formulations
    • Kernel Trick
    • Improving generalisation -- soft constraints
  5. Latent Variable Models
  6. Combining Models
  7. Autoassociative Neural Networks
  8. Spectral Clustering
  9. Graphical Models
  10. Learning over Sequential Data
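
As referenced in the PCA item above, the following is a minimal MATLAB sketch of the maximum-variance formulation of PCA, assuming an N x D data matrix X with one example per row; the function name pca_project is illustrative. For high-dimensional data (N < D), the equivalent N x N eigenproblem covered under "PCA for high-dimensional data" would replace the explicit D x D covariance.

    function [Z, U, mu] = pca_project(X, M)
    % Project N x D data onto the top M eigenvectors of the sample covariance
    % (maximum-variance formulation of PCA, Bishop Sec. 12.1).
    mu = mean(X, 1);                        % 1 x D sample mean
    Xc = bsxfun(@minus, X, mu);             % centred data
    S  = (Xc' * Xc) / size(X, 1);           % D x D sample covariance matrix
    [U, L] = eig(S);                        % eigenvectors and eigenvalues of S
    [~, idx] = sort(diag(L), 'descend');    % order by decreasing variance
    U = U(:, idx(1:M));                     % principal directions u_1, ..., u_M
    Z = Xc * U;                             % N x M projections onto the subspace
    end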