Course syllabus: Bayesian model selection

This course is devoted to Bayesian model selection. It was delivered in autumn 2021.

Part 1

  • Lecture 1: Introduction
  • Lecture 2: Naive Bayes classifier, exponential family of distributions
    • Exercise 1
  • Lecture 3: Bayesian linear regression and model evidence (a numerical sketch follows this list)
  • Lecture 4: Model evidence
    • Test 1
  • Lecture 5: Analysis of model evidence and statistical significance
    • Task 2
    • Practice 1
    • Data for Practice 1
  • Lecture 6: Bayesian logistic regression, feature selection, and the EM algorithm
    • Task 3
  • Lecture 7: EM algorithm and variational EM algorithm, missing data
  • Lecture 8: Variational EM algorithm
  • Lecture 9: Gaussian processes and evolution of models in time
  • Lecture 10: Construction of adequate multi-models
    • Task 4
  • Lecture 11: Markov chain Monte Carlo methods
    • Practice 1 (continued)
  • Lecture 12: Hamiltonian Monte Carlo methods
  • Lecture 13: Bayesian optimization
    • Written report
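
As an illustration of the model-evidence material in Lectures 3-5, the sketch below computes the log marginal likelihood of Bayesian linear regression with a zero-mean Gaussian prior of precision alpha and Gaussian noise of precision beta, following the closed form given in Bishop (2006), chapter 3; the toy data, polynomial features, and hyperparameter values are illustrative assumptions only.

import numpy as np

def log_evidence(X, y, alpha, beta):
    """Log marginal likelihood ln p(y | X, alpha, beta) for Bayesian linear regression."""
    N, M = X.shape
    A = alpha * np.eye(M) + beta * X.T @ X          # posterior precision of the weights
    m_N = beta * np.linalg.solve(A, X.T @ y)        # posterior mean of the weights
    E_mN = 0.5 * beta * np.sum((y - X @ m_N) ** 2) + 0.5 * alpha * m_N @ m_N
    _, logdet_A = np.linalg.slogdet(A)
    return (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta)
            - E_mN - 0.5 * logdet_A - 0.5 * N * np.log(2.0 * np.pi))

# Toy model comparison: polynomial features of degree 1 versus degree 3
# (the degree with the larger evidence is preferred).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)
y = np.sin(np.pi * x) + 0.1 * rng.normal(size=50)
for degree in (1, 3):
    X = np.vander(x, degree + 1, increasing=True)
    print(degree, log_evidence(X, y, alpha=1.0, beta=100.0))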

Part 2

  • Lecture 1: EM algorithm (a short sketch follows this list)
  • Lecture 2: Applications of the EM algorithm
  • Lecture 3: Variational EM algorithm
    • Practice on the EM and variational EM algorithms
  • Lecture 4: Hamiltonian Monte Carlo methods and comparison with the variational EM algorithm
    • Practice on the variational EM algorithm and comparison with HMC
  • Lecture 5: Graphical models and conditional independence of variables
    • Practice 1
    • Data for Practice 1
  • Lecture 6: Directed and undirected graphical models and the relationship between them
  • Lecture 7: Factor graphs and exact inference in acyclic graphical models
  • Lecture 8: Max-sum algorithm and hidden Markov models
  • Lecture 9: Baum-Welch algorithm for estimating the parameters of hidden Markov models
    • Practice on the Baum-Welch algorithm
  • Lecture 10: Minimum-cut algorithms on graphs for inference in graphical models
  • Lecture 11: TRW (tree-reweighted) algorithm for inference in cyclic graphical models over the total energy
  • Lecture 12: Estimation of hyperparameters of graphical models
    • Competition
    • Exam
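
As an illustration of the EM material in Lectures 1-2 of Part 2, the sketch below runs the EM algorithm for a two-component one-dimensional Gaussian mixture; the synthetic data, the initialization, and the fixed number of iterations are illustrative assumptions only.

import numpy as np
from scipy.stats import norm

def em_gmm(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture; returns weights, means, std devs."""
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])   # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each data point.
        dens = pi * norm.pdf(x[:, None], mu, sigma)               # shape (N, 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations.
        Nk = resp.sum(axis=0)
        pi = Nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    return pi, mu, sigma

# Synthetic data drawn from two Gaussians with different means and spreads.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(em_gmm(x))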

References

  1. David MacKay, 2005, Information Theory, Inference, and Learning Algorithms
  2. Christopher Bishop, 2006, Pattern Recognition and Machine Learning
  3. David Barber, 2014, Bayesian Reasoning and Machine Learning
  4. Daphne Koller and Nir Friedman, 2009, Probabilistic Graphical Models
  5. Kevin P. Murphy, 2012, Machine Learning: A Probabilistic Perspective