Advanced Machine Learning (CS 667)
Spring 2017

Dr. Nazar Khan

The ability of biological brains to sense, perceive, analyse and recognise patterns can only be described as stunning. Moreover, they can learn from new examples. Mankind's understanding of exactly how biological brains operate is embarrassingly limited.

However, numerous 'practical' techniques exist that give machines the 'appearance' of being intelligent. This is the domain of statistical pattern recognition and machine learning. Instead of attempting to mimic the complex workings of a biological brain, this course aims to explain mathematically well-founded techniques for analysing patterns and learning from them.

This course is an extension of CS 567 -- Machine Learning and is therefore a mathematically involved introduction to the field of pattern recognition and machine learning. It will prepare students for further study and research in Pattern Recognition, Machine Learning, Computer Vision, Data Analysis and other areas that attempt to solve Artificial Intelligence (AI) type problems.

Pre-requisite(s): CS 567 -- Machine Learning

Text:

  1. (Required) Pattern Recognition and Machine Learning by Christopher M. Bishop (2006)
  2. (Recommended) Pattern Classification by Duda, Hart and Stork (2001)

Lectures:
Tuesday     9:45 am - 11:15 am    Al Khwarizmi Lecture Theater
Thursday    9:45 am - 11:15 am    Al Khwarizmi Lecture Theater

Office Hours:
Wednesday   10:00 am - 1:00 pm

Programming Environment: MATLAB

Grading Scheme/Criteria:
Category       Weight    Effective* Weight
Quizzes        5%        5%
Assignments    12%       30%
Project        8%        15%
Mid-Term       35%       20%
Final          40%       30%
*The current grading scheme is a PU requirement that I do not agree with. Assignments will effectively constitute 30% of the course and the project 15%. This will be achieved by awarding 15% of the mid-term and 10% of the final based on performance in the assignments and project. So the mid-term is effectively 20% of the grade and the final is effectively 30% of the grade.

Assignments

  • Logistic Regression
    • Assignment 1: Implement a binary Logistic Regression classifier and train it using the IRLS algorithm to recognise hand-written digits for 2 classes from the MNIST dataset (see the IRLS sketch after this list). (Due: Tuesday, March 7th, 2017)
    • Assignment 2: Implement a multiclass Logistic Regression classifier and train it using SGD to recognise hand-written digits from the MNIST dataset. (Due: Tuesday, March 14th, 2017)
  • Neural Networks
    • Assignment 3: Implement the backpropagation algorithm for MLP training and regenerate Figure 5.3 from Bishop's book. (Due: Tuesday, March 21st, 2017)
  • Convolutional Neural Networks
    • Assignment 4: Implement a Convolutional Neural Network for classification and train it to recognise hand-written digits from the MNIST dataset. (Due: Tuesday, April 11th, 2017)
  • PCA
    • Assignment 5: Implement Principal Component Analysis and regenerate Figures 12.3, 12.4 and 12.5 from Bishop's book. (Due: Tuesday, April 25th, 2017)
    • Assignment 6: Implement Principal Component Analysis for classification and use it to recognise hand-written digits from the MNIST dataset. (Due: Tuesday, May 2nd, 2017)
  • Density estimation via Gaussian Mixture Model (GMM)
    • Assignment 7: Write a generic implementation of GMM learning via the EM algorithm and regenerate Figure 9.8 from Bishop's book (see the EM sketch after this list). (Due: Thursday, May 18th, 2017)
  • Multimodal conditional density estimation via Mixture Density Network (MDN)
    • Assignment 8: Write a generic implementation of MDN learning and regenerate Figures 5.19 and 5.21 from Bishop's book. (Due: Tuesday, June 6th, 2017)
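
A minimal sketch of the IRLS update for Assignment 1 is given below (Bishop, Section 4.3.3). It is illustrative rather than a reference solution: the function name, the small jitter added to the Hessian, and the fixed iteration count are all assumptions, and the MNIST loading and feature-extraction code is omitted.

    % Binary logistic regression trained with IRLS (Newton-Raphson).
    % X : N-by-D design matrix (prepend a column of ones for the bias)
    % t : N-by-1 vector of 0/1 targets
    function w = irls_logistic(X, t, numIters)
        [~, D] = size(X);
        w = zeros(D, 1);                     % start from w = 0
        for iter = 1:numIters
            y = 1 ./ (1 + exp(-X * w));      % sigmoid outputs y_n
            R = diag(y .* (1 - y));          % N-by-N weighting matrix
            g = X' * (y - t);                % gradient of the cross-entropy error
            H = X' * R * X + 1e-8 * eye(D);  % Hessian, jittered for stability
            w = w - H \ g;                   % Newton-Raphson step
        end
    end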
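Similarly, for Assignment 7 the EM loop for a K-component GMM (Bishop, Section 9.2) might look like the sketch below. Here mvnpdf comes from MATLAB's Statistics and Machine Learning Toolbox, the random-point initialisation and covariance jitter are illustrative choices, and the elementwise operations rely on implicit expansion (MATLAB R2016b or later).

    % EM for a K-component Gaussian mixture model.
    % X : N-by-D data matrix
    function [mu, Sigma, pik] = gmm_em(X, K, numIters)
        [N, D] = size(X);
        mu = X(randperm(N, K), :);           % initialise means at K random points
        Sigma = repmat(eye(D), [1 1 K]);     % identity covariances
        pik = ones(1, K) / K;                % uniform mixing coefficients
        for iter = 1:numIters
            gamma = zeros(N, K);             % E-step: responsibilities
            for k = 1:K
                gamma(:, k) = pik(k) * mvnpdf(X, mu(k, :), Sigma(:, :, k));
            end
            gamma = gamma ./ sum(gamma, 2);  % normalise over components
            Nk = sum(gamma, 1);              % M-step: effective counts
            for k = 1:K
                mu(k, :) = gamma(:, k)' * X / Nk(k);
                Xc = X - mu(k, :);           % centred data
                Sigma(:, :, k) = (Xc' * (gamma(:, k) .* Xc)) / Nk(k) + 1e-6 * eye(D);
            end
            pik = Nk / N;
        end
    end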

Content

  1. Neural Networks
    • Mathematical model of a single neuron
    • Learn optimal features φ* as well as weights w* for those features
    • Multilayer Perceptrons
    • Back-propagation
    • Regularization Techniques
      • Weight decay
      • Per-layer weight decay
      • Early stopping
      • Training with transformed data
      • Tangent propagation
    • Convolutional Neural Networks
      • Neurons as detectors
      • Invariance
      • Local correlation property of images
      • Receptive field
      • Feature maps
      • Weight sharing
  2. Principal Component Analysis
    • Dimensionality Reduction, Data Compression, Feature Extraction
    • Maximum Variance Formulation of PCA (see the sketch at the end of this outline)
    • PCA for high-dimensional data
    • Whitening
    • Classification via PCA
  3. Autoencoders
    • Autoassociative Neural Network
    • Equivalence with PCA
    • Non-linear PCA
    • Autoencoders and Deep Learning
  4. Support Vector Machines and Kernel Methods
    • Maximising the margin -- hard constraints
    • Lagrange Multipliers Method for Constrained Optimization
      • Maximization with equality constraint
      • Minimization with equality constraint
      • Maximization with inequality constraint
      • Minimization with inequality constraint
      • Optimization with multiple constraints
    • Dual formulations
    • Kernel Trick
    • Improving generalisation -- soft constraints
  5. Latent Variable Models
  6. Combining Models
  7. Spectral Clustering
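
For the PCA topics above (and Assignments 5 and 6), a minimal projection routine under the maximum variance formulation (Bishop, Section 12.1) might look like the following sketch; the function name and outputs are illustrative assumptions.

    % PCA via the maximum variance formulation.
    % X : N-by-D data matrix, M : number of principal components to keep
    function [Z, U, mu] = pca_project(X, M)
        mu = mean(X, 1);                  % data mean
        Xc = X - mu;                      % centre the data (implicit expansion)
        S = (Xc' * Xc) / size(X, 1);      % D-by-D sample covariance
        [V, E] = eig(S, 'vector');        % eigenvectors and eigenvalues of S
        [~, order] = sort(E, 'descend');  % largest eigenvalues first
        U = V(:, order(1:M));             % top-M principal directions
        Z = Xc * U;                       % N-by-M projected coordinates
    end

When D is much larger than N, the same leading directions can be recovered more cheaply from the N-by-N matrix Xc*Xc'/N, which is the idea behind the "PCA for high-dimensional data" topic above.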