Course syllabus: Bayesian model selection and multimodeling
The lecture course addresses the central problem of machine learning: model selection. One can set a heuristic model and optimize its parameters, select a model from a class, make a teacher model transfer its knowledge to a student model, or even build an ensemble of models. Behind all these strategies lies a fundamental technique: Bayesian inference. It makes hypotheses about the measured data set, the model parameters, and even the model structure, and from them deduces the error function to optimize. This is called the Minimum Description Length principle; it selects simple, stable, and precise models. The course joins theory with practical lab works on model selection and multimodeling. Course page: https://github.com/Intelligent-Systems-Phystech/BMM-21
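For instance (a minimal sketch, not from the course labs: synthetic data, and BIC used as an asymptotic stand-in for the negative log model evidence), selecting a model by description length can look like this:

```python
# A minimal sketch of evidence-based model selection in the spirit of MDL:
# compare polynomial degrees by BIC, an asymptotic approximation to the
# negative log model evidence. All names and data here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0, 0.3, x.size)  # true degree is 2

def bic(degree):
    # Fit by least squares, then score: n*log(RSS/n) + k*log(n).
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = degree + 2  # polynomial coefficients plus the noise variance
    n = x.size
    return n * np.log(rss / n) + k * np.log(n)

scores = {d: bic(d) for d in range(1, 8)}
best = min(scores, key=scores.get)
print("selected degree:", best)  # the simplest adequate model wins
```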
Grading
- Labs: 6 in total
- Forms: 1 in total
- Reports: 2 in total
- The maximum raw score is 11; the final score is min(10, score), i.e. the grade is capped at 10
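A one-line illustration of the capping rule (variable names are illustrative only):

```python
# The raw score can reach 11; the final grade is capped at 10.
raw_score = 11
final_score = min(10, raw_score)
print(final_score)  # -> 10
```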
Syllabus
- 8.09 Intro
- 15.09 Distributions, expectation, likelihood
- 22.09 Bayesian inference
- 29.09 MDL, Minimum description length principle
- 6.10 Probabilistic metric spaces
- 13.10 Generative and discriminative models
- 20.10 Data generation, VAE, GAN
- 27.10 Probabilistic graphical models
- 3.11 Variational inference
- 10.11 Variational inference 2
- 17.11 Hyperparameter optimization
- 24.11 Meta-optimization
- 1.12 Bayesian PCA, GLM and NN
- 8.12 Gaussian processes
References
- Textbooks for background reading: Bishop; Barber; Murphy; Rasmussen and Williams; Taboga
- Kuznetsov M.P., Tokmakova A.A., Strijov V.V. Analytic and stochastic methods of structure parameter estimation // Informatica, 2016, 27(3): 607–624.
- Bakhteev O.Y., Strijov V.V. Deep learning model selection of suboptimal complexity // Automation and Remote Control, 2018, 79(8): 1474–1488.
- Bakhteev O.Y., Strijov V.V. Comprehensive analysis of gradient-based hyperparameter optimization algorithms // Annals of Operations Research, 2020: 1–15.
Syllabus (variant): Bayesian models and ensembles
- Models, distributions, expectation, likelihood
- Algebra on distributions: ways of marginalisation and reconstruction of the joint distribution
- Bayesian inference
- Probabilistic metric spaces
- Generative and discriminative models
- Data generation, VAE, GAN
- Probabilistic graphical models
- Variational inference
- Bayesian PCA, GLM and NN
- Gaussian processes
- Belief propagation, networks, and hierarchical models
- Bayesian inference for model selection
- Structure priors and model selection
- Informative prior and MCMC
- Sampling: importance sampling and Metropolis-Hastings (see the sketch after this list)
- Random processes and genetic algorithms for model generation
- Model ensembles
- Mixture of experts
- Knowledge distillation and privileged learning
- Transfer learning, multitask learning
- Domain adaptation
- Projection to latent space
- Bayesian agents
- Multi-agent systems and reinforcement learning
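For the sampling topic above, here is a minimal sketch of the Metropolis-Hastings sampler (assumptions: a standard normal target and a Gaussian random-walk proposal; this is not the course's lab code):

```python
# Metropolis-Hastings with a symmetric random-walk proposal.
# Target and step size are illustrative assumptions.
import numpy as np

def log_target(theta):
    # Unnormalized log density of a standard normal target.
    return -0.5 * theta**2

def metropolis_hastings(n_samples=10_000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = 0.0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = theta + step * rng.normal()
        # Accept with probability min(1, p(proposal)/p(theta));
        # the symmetric proposal density cancels in the ratio.
        if np.log(rng.uniform()) < log_target(proposal) - log_target(theta):
            theta = proposal
        samples[i] = theta
    return samples

s = metropolis_hastings()
print(s.mean(), s.std())  # should be close to 0 and 1
```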