Physics-informed machine learning


Machine Learning for Theoretical Physics

Physics-informed machine learning
(seminars by Andriy Graboviy and Vadim Strijov)

Goals

The course consists of a series of group discussions devoted to various aspects of data modelling in continuous spaces. It aims to reduce the gap between the models of theoretical physics and noisy measurements performed under complex experimental conditions. To show that the selected neural network is an adequate parametrisation of the modelled phenomenon, we use a geometric axiomatic approach. We discuss the role of manifolds, tensors and differential forms in neural network-based model selection.

The basics for the course are the book Geometric Deep Learning (Michael Bronstein et al., April 2021) and the paper "Physics-informed machine learning" (George Em Karniadakis et al., Nature, May 2021).
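To make the physics-informed idea above concrete, here is a minimal sketch (not from the course materials): a model is fitted to noisy measurements while a second loss term penalises the residual of a governing differential equation at collocation points. The cubic-polynomial model, the decay ODE u' + k u = 0, and the weight lam are illustrative assumptions — a real physics-informed network replaces the polynomial with a neural parametrisation and uses automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy physics: u'(t) + k u(t) = 0, so u(t) = exp(-k t).
k = 1.5
t_data = np.linspace(0.0, 2.0, 8)
u_data = np.exp(-k * t_data) + 0.02 * rng.standard_normal(t_data.size)

# Model u(t) by a cubic polynomial -- a linear-in-parameters
# stand-in for a neural network, with basis [1, t, t^2, t^3].
def basis(t):
    return np.stack([np.ones_like(t), t, t**2, t**3], axis=1)

def dbasis(t):
    # Analytic derivative of the basis in t.
    return np.stack([np.zeros_like(t), np.ones_like(t), 2 * t, 3 * t**2], axis=1)

# Physics term: penalise the ODE residual u' + k u at collocation points.
t_col = np.linspace(0.0, 2.0, 32)
lam = 0.1  # illustrative weight of the physics term

# Because the model is linear in its parameters, the combined
# data + physics objective is an ordinary least-squares problem.
A = np.vstack([
    basis(t_data),                                              # data-fit rows
    np.sqrt(lam / t_col.size) * (dbasis(t_col) + k * basis(t_col)),  # physics rows
])
b = np.concatenate([u_data, np.zeros(t_col.size)])

w, *_ = np.linalg.lstsq(A, b, rcond=None)

# Maximum deviation of the fit from the noise-free solution.
err = float(np.abs(basis(t_data) @ w - np.exp(-k * t_data)).max())
print(err)
```

With a nonlinear network the same two-term loss is minimised by gradient descent instead of a single least-squares solve; the structure of the objective — data misfit plus weighted physics residual — is unchanged.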

Structure of the talk

Each talk is based on a two-page essay ([template]).

  1. Field and goals of a method or a model
  2. An overview of the method
  3. Notable authors and references
  4. Rigorous description, the theoretical part
  5. Algorithm and link to the code
  6. Application with plots

Grading

Each student presents two talks. Each talk lasts 25 minutes and concludes with a five-minute written test. A seminar presentation earns 1 point, a formatted seminar text earns 1 point, a test earns 1 point, and a reasonable test response earns 0.1 point. A bonus point is awarded for a great talk.

Test

To do: how can the test be made creative rather than automated? The test format will be described here.

Themes

  1. Spherical harmonics for mechanical motion modelling
  2. Tensor representations for brain-computer interfaces
  3. Multi-view learning, kernels and metric spaces for BCI and brain imaging
  4. Continuous-Time Representation and Legendre Memory Units for BCI
  5. Riemannian geometry on Shapes and diffeomorphisms for fMRI
  6. The affine connection setting for transformation groups for fMRI
  7. Strain, rotation and stress tensors modelling with examples
  8. Differential forms and fibre bundles with examples
  9. Modelling gravity with machine learning approaches
  10. Geometric manifolds, the Levi-Civita connection and curvature tensors
  11. Flows and topological spaces
  12. Applications of normalizing flow models (emphasis on spaces, not statistics)
  13. Alignment in higher dimensions with RNN
  14. Navier-Stokes equations and viscous flow
  15. Newtonian and Non-Newtonian Fluids in Pipe Flows Using Neural Networks [1], [2]
  16. Applications of geometric algebra and the exterior product
  17. High-order splines
  18. Forward and Backward Fourier transform and iPhone lidar imaging analysis
  19. Fourier, cosine and Laplace transforms in 2, 3, 4 and higher dimensions
  20. Spectral analysis on meshes
  21. Graph convolution and continuous Laplace operators

Schedule

Thursdays at 12:30 at m1p.org/go_zoom

  • September 2 9 16 23 30
  • October 7 14 21 28
  • November 4 11 18 25 
  • December 2 9


Date         | Theme                              | Speaker         | Links
September 2  | Course introduction and motivation | Vadim Strijov   | GDL paper, Physics-informed
September 9  |                                    |                 |
September 9  |                                    |                 |
September 16 |                                    |                 |
September 16 |                                    |                 |
September 23 |                                    |                 |
September 23 |                                    |                 |
September 30 |                                    |                 |
September 30 |                                    |                 |
October 7    |                                    |                 |
October 7    |                                    |                 |
October 14   |                                    |                 |
October 14   |                                    |                 |
October 21   |                                    |                 |
October 21   |                                    |                 |
October 28   |                                    |                 |
October 28   |                                    |                 |
November 4   |                                    |                 |
November 4   |                                    |                 |
November 11  |                                    |                 |
November 11  |                                    |                 |
November 18  |                                    |                 |
November 18  |                                    |                 |
November 25  |                                    |                 |
November 25  |                                    |                 |
December 2   |                                    |                 |
December 2   |                                    |                 |
December 9   | Final discussion and grading       | Andriy Graboviy |


References

  • Geometric deep learning
  • Functional data analysis