==Intelligent Data Analysis 2024==
 
 
[https://t.me/+2kCkkAyLbUxiOGQ6 The chat-link]
 
The statistical analysis of spatial time series requires additional methods of data analysis. First, we suppose time is continuous, put the state-space change <math>\frac{d\mathbf{x}}{dt}</math> into the model, and use neural ordinary and stochastic differential equations. Second, we analyze multivariate and multidimensional time series, and use tensor representations and tensor analysis. Third, since the time series have significant cross-correlation, we model them in a Riemannian space. Fourth, medical time series are periodic, so the base model is the pendulum model, <math>\frac{d^2x}{dt^2}=-c\sin{x}</math>; we use physics-informed neural networks to approximate the data. Fifth, practical experiments involve multiple data sources, so we use canonical correlation analysis with a latent state space. This space aligns the source and target spaces and generates data on the source and target manifolds.
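
As a warm-up, here is a minimal sketch of the pendulum base model (assuming NumPy and SciPy are available; the constant <math>c=1</math> and the initial state are arbitrary choices for illustration): it rewrites <math>\frac{d^2x}{dt^2}=-c\sin{x}</math> as a first-order system and integrates it numerically, giving convenient synthetic periodic data for the models below.

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

c = 1.0  # pendulum constant; an arbitrary choice for this sketch

def pendulum(t, state):
    # state = (x, v): rewrite x'' = -c sin x as a first-order system
    x, v = state
    return [v, -c * np.sin(x)]

# integrate over [0, 10] from x(0) = 1, x'(0) = 0
sol = solve_ivp(pendulum, (0.0, 10.0), [1.0, 0.0], dense_output=True)
t = np.linspace(0.0, 10.0, 200)
x = sol.sol(t)[0]  # a sampled periodic trajectory, usable as synthetic data
</syntaxhighlight>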
  
====Applications====
 
This field of Machine Learning applies to any domain where the measurements are continuous in time and space and acquired from multimodal sources: climate modeling, neural interfaces, solid-state physics, electronics, fluid dynamics, and many more. We will carefully collect both the theory and its practice.
== Course arrangement==
  
 
===Your profit===
Your goal is to enhance your ability to '''convey messages''' to the reader in the '''language of applied mathematics'''. The main part of your MS thesis work is the theoretical foundations of Machine Learning, where you present your personal results supported by the necessary theory.
  
 
===Structure of a seminar===
The semester lasts 10 weeks: five two-week cycles, each devoted to one homework.
 
* Odd week: introduction to the topic and handout of a theme for the homework.
* Even week: a discussion of the essay, collecting the list of improvements for each essay.
* Odd week: a discussion of the improved essay, putting the essays into a joint structure.

===Scoring===
Each essay brings one point, and each improvement brings one point. If an essay is perfect and no improvement is required, it counts as one plus one point. The threshold for the binary pass decision is seven points.

===The homework===
 
The course gives two credits, so it requires time. The result is a two-page essay that delivers an introduction to the designated topic. It could be automatically generated or collected from Wikipedia; the main requirement is that you are responsible for each statement of your essay. Each formula is yours.
  
The essay carries a comprehensive and strict answer to the topic question; illustrative plots are welcome. The result should be ready to compile into the joint manuscript after the even week, so please use the LaTeX template.
  
 
The style is that of set theory, algebra, analysis, and Bayesian statistics. Category theory and homotopy theory are welcome.

This course gives you two credits, that is, 76 academic hours over the 10 weeks; beyond the seminars themselves, plan on about '''5 hours of weekly''' homework.

====Templates and links====

* The course GitHub to download the homework essays
* The Overleaf project to compile the joint manuscript
* The LaTeX template for an essay
* [https://t.me/+2kCkkAyLbUxiOGQ6 The course chat] to ask questions
  
 
====Requirements for the text and the discussion====
# Comprehensive explanation of the method or the question we discuss
# Only the principle, no experiments
# Two-page text (more or less)
# The reader is a second- or third-year student
# A picture is obligatory
# However, a brief reference to some deep learning structure is welcome
# The talk can be slides or the text itself
# The list of references with DOIs
# Tell how the text was generated
# If you observe a gap, put a note about it (to question later)

====Style remarks for the essays====
 
Automatic generation of mediocre-quality texts has raised the requirements for the quality of new messages: it makes novelty rare and authorship appreciated, while simplifying delivery. So, since textbook generation has become simple, we will use generative chats to train our skills of reader persuasion. The reader is our MS-thesis defense committee.
  
'''Additional remarks for clarification.'''
People have already invented everything they need. Long ago, humanity developed very rapidly: not only the things surrounding people changed constantly, but also the words they used. In those days there were many different names for a creative person: engineer, poet, scientist. And all of them constantly invented the new. But that was humanity's childhood. Then it reached maturity. Creativity did not disappear, but it was reduced to choosing from what had already been created. Figuratively speaking, we no longer grow grapes. We send someone to the cellar for a bottle. The people who do this are called "sommeliers". (V. Pelevin)

'''Avoid this style''' (reserved for the seminar)

# [https://medium.com/p/b1a38847219d CCA comprehensive overview]
# [https://towardsdatascience.com/principal-component-analysis-hands-on-tutorial-3a451ff3d5db PCA tutorial]

==Table of homeworks==

During these ten weeks we discuss the following five topics:
 
# Multimodal data
# Continuous time and space models
# Physics-informed models
# Multilinear models
# Riemannian spaces

Note that all these items shed light on the stochastic-deterministic decomposition, so the questions include three parts:
# a deterministic model,
# a generative model,
# a stochastic-deterministic decomposition method.

See the questions below for your reference.

===Multimodal data===

First series (a minimal CCA sketch follows the list):
# Canonical Correlation Analysis
# CCA in tensor representation
# Kernel CCA in Hilbert and L2[a,b] spaces
# CCA versus Cross-Attention Transformers
# Generative CCA, diffusion, and flow
# Comparative analysis of variants of CCA, like PLS and others
# Functional PCA
<!-- # Canonical Correlation Analysis: forecasting model and loss function with variants-->
<!-- # CCA parameter estimation algorithm -->
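
A minimal NumPy sketch of classical CCA for reference, via the SVD of the whitened cross-covariance; the toy two-view data and all names are illustrative, not a prescribed implementation.

<syntaxhighlight lang="python">
import numpy as np

def cca(X, Y, k=1, eps=1e-8):
    """Top-k canonical directions and correlations of two views."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):  # inverse matrix square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U[:, :k], Wy @ Vt[:k].T, s[:k]  # projections, correlations

# two noisy views of one latent signal
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = z @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(500, 5))
Y = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(500, 4))
A, B, corr = cca(X, Y)
print(corr)  # close to 1 for the shared latent component
</syntaxhighlight>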
  
===Continuous models===

Second series (a minimal forward-integration sketch follows the list):
# Neural ODE
# Continuous state-space models
# Continuous normalizing flows
# Adjoint method and continuous backpropagation
# Neural Delayed Differential Equations <!-- # Neural CDE (PID control is welcome)-->
# Neural PDE
# S4 and HiPPO models
# Riemannian continuous models
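
A minimal sketch of the forward pass of a neural ODE: the vector field <math>f(\mathbf{x},t)</math> is a tiny randomly initialized MLP here, and a fixed-step Euler solver plays the role of the integrator; in practice the weights are trained, e.g., by the adjoint method from the list above. All sizes and constants are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
# a tiny MLP as the vector field f(x, t; theta); theta is fixed here,
# while a real neural ODE trains it through the ODE solver
W1, b1 = 0.5 * rng.normal(size=(16, 2)), np.zeros(16)
W2, b2 = 0.5 * rng.normal(size=(2, 16)), np.zeros(2)

def f(x, t):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def odeint_euler(x0, t):
    # fixed-step Euler: x_{k+1} = x_k + h * f(x_k, t_k)
    xs = [x0]
    for t0, t1 in zip(t[:-1], t[1:]):
        xs.append(xs[-1] + (t1 - t0) * f(xs[-1], t0))
    return np.stack(xs)

t = np.linspace(0.0, 1.0, 101)
trajectory = odeint_euler(np.array([1.0, 0.0]), t)  # the hidden state flow x(t)
</syntaxhighlight>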
  
===Physics-Informed models===

Third series (a collocation-loss sketch follows the list):
# PINNs as multimodels
# Spherical harmonics in p dimensions (an IMU example is welcome)
# PDF and Physics-Informed learning
# Integral Transforms in Physics-Informed learning
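
A minimal sketch of the physics-informed idea on the pendulum model: the loss is the squared ODE residual on collocation points plus an initial-condition penalty. For brevity a closed-form harmonic ansatz replaces the neural network; the ansatz, the penalty weight, and the constants are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

c = 1.0
t = np.linspace(0.0, 10.0, 200)  # collocation points

def pinn_loss(theta):
    a, b, w = theta                    # ansatz x(t) = a sin(wt) + b cos(wt)
    x = a * np.sin(w * t) + b * np.cos(w * t)
    xtt = -w**2 * x                    # exact second derivative of the ansatz
    physics = np.mean((xtt + c * np.sin(x))**2)  # ODE residual on the grid
    ic = (b - 1.0)**2 + (a * w)**2               # x(0) = 1, x'(0) = 0
    return physics + 10.0 * ic

out = minimize(pinn_loss, x0=np.array([0.0, 1.0, 1.0]), method="Nelder-Mead")
print(out.x)  # amplitude and frequency of the physics-consistent fit
</syntaxhighlight>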

===Multilinear models and topology===

Fourth series (a truncated HOSVD sketch follows the list):
# Clifford or geometric algebra in machine learning
# Tensor models, tensor decomposition, and approximation (tensor PLS or CCA)
# Machine learning models for tensors: field equations (Yang-Mills equations)
# Machine learning models for theoretical physics (Maxwell's equations, Navier-Stokes)
# Persistent homology and dimensionality reduction (say, arXiv:2302.03447 with embedding delays)
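
A minimal NumPy sketch of a truncated higher-order SVD, a standard baseline for Tucker-type tensor approximation; the tensor, the ranks, and the helper names are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def unfold(T, mode):
    # mode-n unfolding: fibers of the given mode become columns
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    # multiply tensor T by matrix M along the given mode
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from mode-wise SVDs plus a core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)
    return core, factors

rng = np.random.default_rng(2)
T = rng.normal(size=(6, 7, 8))
core, factors = hosvd(T, ranks=(3, 3, 3))
R = core
for m, U in enumerate(factors):
    R = mode_dot(R, U, m)  # reconstruct the rank-(3,3,3) approximation
print(np.linalg.norm(T - R) / np.linalg.norm(T))  # relative approximation error
</syntaxhighlight>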

===Generative and Riemannian models===

Fifth series (a sphere exponential-map sketch follows the list):
# Generative Riemannian models. How do we extract and use the distribution?
# Generative Canonical Correlation Analysis and its connection with the Riemannian spaces in the latent part
# Score-based Riemannian models. How do we extract and use the distribution?
# Generative convolutional models for tensors. Is there a continuous-time variant? (One option is Riemannian Residual Networks.)
# Riemannian continuous normalizing flows. How do we generate a time series with a given distribution?
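
A minimal sketch of the generative step on a Riemannian manifold: draw Gaussian vectors in the tangent plane at a base point of the sphere <math>S^2</math> and push them onto the manifold with the exponential map, giving a "wrapped" Gaussian; the base point and scale are illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

def exp_map_sphere(p, v):
    # exponential map on the unit sphere: walk from p along tangent vector v
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return p
    return np.cos(norm) * p + np.sin(norm) * v / norm

rng = np.random.default_rng(3)
p = np.array([0.0, 0.0, 1.0])  # base point (the north pole)

samples = []
for _ in range(1000):
    g = 0.3 * rng.normal(size=3)
    v = g - (g @ p) * p        # project the ambient Gaussian onto T_p S^2
    samples.append(exp_map_sphere(p, v))
samples = np.stack(samples)    # points on the sphere, a wrapped Gaussian law
print(np.allclose(np.linalg.norm(samples, axis=1), 1.0))
</syntaxhighlight>
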
==References==
 
===General===

# Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems [https://arxiv.org/abs/2307.08423 arxiv 2023]
# Algebra, Topology, Differential Calculus, and Optimization Theory For Computer Science and Machine Learning, upenn 2024
# The Elements of Differentiable Programming, arxiv 2024
# The list from the previous year 2023

===Prerequisites===

# Understanding Deep Learning ''by Simon J.D. Prince'', mit 2023
# Deep Learning ''by C.M. Bishop and H. Bishop'' [https://www.bishopbook.com/ Springer 2024] (online version)
# A Geometric Approach to Differential Forms ''by David Bachman'' [https://arxiv.org/abs/math/0306194v1 arxiv 2003]
# Advanced Calculus: A Geometric View ''by James J. Callahan'' [https://download.tuxfamily.org/openmathdep/calculus_advanced/Advanced_Calculus-Callahan.pdf pdf 2010]
# Geometric Deep Learning ''by Michael M. Bronstein et al.'' [https://arxiv.org/pdf/2104.13478 arxiv 2021]

===Linear and bilinear models===
 
# A Tutorial on Independent Component Analysis [https://arxiv.org/abs/1404.2986 arxiv 2014]
# On the Stability of Multilinear Dynamical Systems [https://arxiv.org/abs/2105.01041 arxiv 2022]
# Tensor-based Regression Models and Applications ''by Ming Hou'', PhD thesis [https://core.ac.uk/download/pdf/442636056.pdf Uni-Laval 2017] <!-- === Tensor models=== -->
# Tensor Canonical Correlation Analysis for Multi-view Dimension Reduction [https://arxiv.org/pdf/1502.02330 arxiv 2015] (Semkin)
====Spherical Harmonics====
 
# Spherical Harmonics in p Dimensions [https://arxiv.org/abs/1205.3548 arxiv 2012]
# Physics of simple pendulum: a case study of nonlinear dynamics [https://www.researchgate.net/publication/332766499_Physics_of_simple_pendulum_a_case_study_of_nonlinear_dynamics RG 2008]
# Time series forecasting using manifold learning [https://arxiv.org/pdf/2110.03625 arxiv 2021]
# Time-series forecasting using manifold learning, radial basis function interpolation, and geometric harmonics [https://doi.org/10.1063/5.0094887 Chaos AIP 2022]

====State Space Models====
 
# Missing Slice Recovery for Tensors Using a Low-rank Model in Embedded Space [https://arxiv.org/abs/1804.01736 arxiv 2018]

====SSM Generative Models====

# Masked Autoregressive Flow for Density Estimation [https://arxiv.org/abs/1705.07057 arxiv 2017]
 
 
 
 
====SSM+Riemann+Gaussian process regression====

* Time-series forecasting using manifold learning, radial basis function interpolation, and geometric harmonics ''by Ioannis G. Kevrekidis and Constantinos Siettos'', 2022 [https://pubs.aip.org/aip/cha/article-pdf/doi/10.1063/5.0094887/16497596/083113_1_online.pdf pdf]
 
===Physics-Informed Neural Networks===
 
 
# Three ways to solve partial differential equations with neural networks — A review [https://arxiv.org/abs/2102.11802 arxiv 2021]
# NeuPDE: Neural Network Based Ordinary and Partial Differential Equations for Modeling Time-Dependent Data [https://arxiv.org/abs/1908.03190 arxiv 2019]
# Physics-based deep learning [https://www.physicsbaseddeeplearning.org/intro-teaser.html code]
# PINN ''by Steve Brunton'' [https://www.youtube.com/watch?v=g-S0m2zcKUg&list=PLMrJAkhIeNNQ0BaKuBKY43k4xMo6NSbBa&index=3 yt]<!-- ===5. PINN and Neural PDE=== -->
# Process Model Inversion in the Data-Driven Engineering Context for Improved Parameter Sensitivities [https://www.mdpi.com/2227-9717/10/9/1764 mdpi processes 2022] ('''nice connection pictures''')
# Physics-based Deep Learning [https://www.physicsbaseddeeplearning.org/intro.html github]
# Integral Transforms in a Physics-Informed (Quantum) Neural Network setting [https://arxiv.org/pdf/2206.14184 arxiv 2022]

===Riemannian models===
 
# Riemannian Continuous Normalizing Flows [https://arxiv.org/abs/2006.10605 arxiv 2020]
# Residual Riemannian Networks [https://arxiv.org/pdf/2310.10013 arxiv 2023] <!--No need to put CCM in this semester -->

===Continuous time, Neural ODE===
 
# Neural Spatio-Temporal Point Processes ''by Ricky Chen et al.'' [https://arxiv.org/abs/2011.04583 iclr 2021] (likelihood for time and space)
# Neural Ordinary Differential Equations ''by Ricky Chen et al.'' [https://arxiv.org/abs/1806.07366 arxiv 2018]
# Neural Controlled Differential Equations for Irregular Time Series ''by Patrick Kidger et al.'' [https://arxiv.org/abs/2005.08926 arxiv 2020] [https://github.com/patrick-kidger/NeuralCDE github]
# Diffusion Normalizing Flow, arxiv 2021
# Differentiable Programming for Differential Equations: A Review, arxiv 2024
# (code tutorial) Deep Implicit Layers - Neural ODEs, Deep Equilibrium Models, and Beyond [https://implicit-layers-tutorial.org/ nips 2020]
# (code tutorial) [https://www.physicsbaseddeeplearning.org/overview-ns-forw.html 2021]
# Neural CDE and tensors [https://ieeexplore.ieee.org/abstract/document/9979806 IEEE], [https://ieeexplore.ieee.org/abstract/document/9533771 IEEE]

===Graph and PDEs===
 
 
# Fourier Neural Operator for Parametric Partial Differential Equations [https://arxiv.org/abs/2010.08895 arxiv 2020]
# Masked Attention is All You Need for Graphs [https://arxiv.org/abs/2402.10793 arxiv 2024]
 
===Neural SDE===
 
 
# Approximation of Stochastic Quasi-Periodic Responses of Limit Cycles in Non-Equilibrium Systems under Periodic Excitations and Weak Fluctuations [https://doi.org/10.3390/e19060280 mdpi entropy 2017] (great illustrations on the stochastic nature of a simple phase trajectory)
# Neural SDEs for Conditional Time Series Generation, arxiv 2023 (code on github; LSTM, CSig-WGAN)
# Neural SDEs as Infinite-Dimensional GANs [https://arxiv.org/pdf/2102.03657 arxiv 2021]
# Efficient and Accurate Gradients for Neural SDEs ''by Patrick Kidger'' [https://arxiv.org/pdf/2105.13493 arxiv 2021], code: [https://docs.kidger.site/diffrax/examples/neural_sde/ diffrax]
 
===Chains and homology===
 
 
# Operator Learning: Algorithms and Analysis [https://arxiv.org/pdf/2402.15715 arxiv 2024]
# High-resolution weather: operator learning [https://arxiv.org/pdf/2202.11214 arxiv 2022]
# Homotopy theory for beginners ''by J.M. Moeller'' [https://web.math.ku.dk/~moller/e01/algtopI/comments.pdf ku.dk 2015] (is it a pertinent link?)
# Explorations in Homeomorphic Variational Auto-Encoding [https://arxiv.org/abs/1807.04689 arxiv 2018]
# Special Finite Elements for Dipole Modelling, ''master thesis by Bauer'' [https://www.sci.utah.edu/~wolters/PaperWolters/2012/BauerMaster.pdf 2011]
# Selecting embedding delays: An overview of embedding techniques and a new method using persistent homology [https://arxiv.org/pdf/2302.03447v1 arxiv 2023] (denis)
  
 
===Appendix===
# Neural Memory Networks, stanford reports 2019
# An Elementary Introduction to Information Geometry ''by Frank Nielsen'' [https://doi.org/10.3390/e22101100 mdpi entropy 2020]
# The Many Faces of Information Geometry ''by Frank Nielsen'' [https://www.ams.org/journals/notices/202201/rnoti-p36.pdf ams 2022] (short version)
# Geometric Clifford Algebra Networks [https://arxiv.org/abs/2302.06594 arxiv 2023]
# Clifford Algebras and Dimensionality Reduction for Signal Separation ''by [https://www.math.uni-hamburg.de/home/guillemard/ M. Guillemard]'' [https://www.math.uni-hamburg.de/home/guillemard/papers/clifford7.pdf Uni-Hamburg 2010], [https://www.math.uni-hamburg.de/home/guillemard/clifford/ code]
# Special Finite Elements for Dipole Modelling ''by Martin Bauer'', Master Thesis [https://www.sci.utah.edu/~wolters/PaperWolters/2012/BauerMaster.pdf Erlangen 2012] (differential p-forms; a must-read)
# Bayesian model selection for complex dynamic systems, 2018
# Visualizing 3-Dimensional Manifolds ''by Dugan J. Hammock'', umass 2013
# At the Interface of Algebra and Statistics ''by T-D. Bradley'' [https://arxiv.org/abs/2004.05631 arxiv 2020]
# Time Series Handbook ''by Borja'', 2021 [https://github.com/phdinds-aim/time_series_handbook github]
# Physics-informed machine learning [https://www.nature.com/articles/s42254-021-00314-5 Nature Reviews Physics 2021]
# Integral Transforms in a Physics-Informed (Quantum) Neural Network setting: Applications & Use-Cases [https://arxiv.org/abs/2206.14184 arxiv 2022]
# Deep Efficient Continuous Manifold Learning for Time Series Modeling [https://arxiv.org/abs/2112.03379 arxiv 2021]

==Basics==

Collection of wiki-links.
===Signal Processing===

# [https://en.wikipedia.org/wiki/Estimation_of_signal_parameters_via_rotational_invariance_techniques Estimation of signal parameters via rotational invariance techniques]
# [https://en.wikipedia.org/wiki/Reproducing_kernel_Hilbert_space Reproducing kernel Hilbert space]
# [https://en.wikipedia.org/wiki/Kernel_principal_component_analysis Kernel principal component analysis]
# [https://en.wikipedia.org/wiki/Gram_matrix Gram matrix]
# [https://en.wikipedia.org/wiki/Generalized_pencil-of-function_method Generalized pencil-of-function method]
# [https://en.wikipedia.org/wiki/Wavelet_transform Wavelet transform]

===Differential Geometry===

# [https://en.wikipedia.org/wiki/Pushforward_(differential) Pushforward (differential)]
# [https://en.wikipedia.org/wiki/Pullback_bundle Fibers, bundles, sheaves]
# [https://en.wikipedia.org/wiki/Homology_(mathematics) Homology]
# [https://en.wikipedia.org/wiki/Topological_data_analysis Topological data analysis]
# [https://en.wikipedia.org/wiki/Conditional_mutual_information Conditional mutual information]
# [https://en.wikipedia.org/wiki/Convergent_cross_mapping Convergent cross mapping]
# [https://en.wikipedia.org/wiki/Differential_form Differential form]
# [https://en.wikipedia.org/wiki/Total_derivative The total derivative as a differential form]
# [https://en.wikipedia.org/wiki/Riemannian_manifold#Riemannian_metrics Riemannian metrics]

===Probabilistic Decomposition===

# [https://en.wikipedia.org/wiki/Wasserstein_metric Wasserstein metric]
# [https://en.wikipedia.org/wiki/Mutual_information Mutual information]
# [https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant Jacobian]
# [https://en.wikipedia.org/wiki/Fisher_information Fisher information]
# See also the Wikipedia entries for Dobrushin, Stratonovich, and Wasserstein

===Tutorials===

# [https://www.connectedpapers.com/main/d86084808994ac54ef4840ae65295f3c0ec4decd/Physics%20informed-neural-networks%3A-A-deep-learning-framework-for-solving-forward-and-inverse-problems-involving-nonlinear-partial-differential-equations/graph Connected papers search]
# Operator Learning via Physics-Informed DeepONet: Let's Implement It From Scratch [https://towardsdatascience.com/operator-learning-via-physics-informed-deeponet-lets-implement-it-from-scratch-6659f3179887 Medium]

==Tools==

# [https://github.com/ilkhem/icebeem icebeem]
# [https://github.com/ilkhem/ivae ivae]
# [https://github.com/kondratevakate/fmri-component-analysis fmri-component-analysis]
# [https://fr.mathworks.com/help/deeplearning/ug/dynamical-system-modeling-using-neural-ode.html Neural ODE in Matlab]
