Intelligent Data Analysis 2024
The statistical analysis of spatial time series requires additional methods of data analysis. First, we suppose that time is continuous, model the changes of the state space \(\frac{d\mathbf{x}}{dt}\), and use neural ordinary and stochastic differential equations. Second, we analyze multivariate and multidimensional time series and use the tensor representation and tensor analysis. Third, since the time series have significant cross-correlation, we model them in a Riemannian space. Fourth, medical time series are periodic; the base model is the pendulum, \(\frac{d^2x}{dt^2}=-c\sin{x}\), and we use physics-informed neural networks to approximate the data (a minimal sketch follows). Fifth, the practical experiments involve multiple data sources, so we use canonical correlation analysis with a latent state space. This space aligns the source and target spaces and generates data in the source and target manifolds.
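As a minimal sketch of the fourth point, assuming PyTorch (the constant \(c=1\), the network width, the time horizon, and the initial conditions \(x(0)=1\), \(x'(0)=0\) are illustrative choices, not part of the course material), a physics-informed network can approximate the pendulum model by penalizing the ODE residual at random collocation points:

```python
import torch

c = 1.0  # illustrative pendulum constant in d^2x/dt^2 = -c sin x

# Small fully connected network mapping time t to the angle x(t).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Collocation points on [0, 10] where the ODE residual is enforced.
    t = 10.0 * torch.rand(128, 1)
    t.requires_grad_(True)
    x = net(t)
    dx = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    d2x = torch.autograd.grad(dx, t, torch.ones_like(dx), create_graph=True)[0]
    residual = d2x + c * torch.sin(x)  # vanishes where the ODE holds

    # Initial conditions x(0) = 1, x'(0) = 0 pin down one trajectory.
    t0 = torch.zeros(1, 1, requires_grad=True)
    x0 = net(t0)
    dx0 = torch.autograd.grad(x0, t0, torch.ones_like(x0), create_graph=True)[0]

    loss = residual.pow(2).mean() + (x0 - 1.0).pow(2).mean() + dx0.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The same residual-penalty pattern extends to the first point: replacing the fixed right-hand side \(-c\sin{x}\) with a trainable network gives a neural ODE.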
Applications
This field of Machine Learning applies to any field where the measurements have continuous time and space data acquired from multimodal sources: climate modeling, neural interfaces, solid-state physics, electronics, fluid dynamics, and many more. We will carefully collect both the theory and its practice.
Your profit
Your goal is to enhance your abilities to deliver messages to the reader in the language of applied mathematics. The main part of your MS thesis work is the theoretical foundations of Machine Learning, where you present your personal results supported by the necessary theory.
Structure of a seminar
The semester has 10 weeks: five two-week cycles of homework.
- Odd week: introduction to the topic and handout of a theme for the homework.
- Even week: discussion of the essays, collecting a list of improvements for each essay.
- Odd week: discussion of the improved essays, putting them into a joint structure.
Scoring
Each essay brings one point, and each improvement brings one point. If an essay is perfect and no improvement is required, it still counts as one plus one point. The threshold for the binary pass decision is seven points.
The homework
The course gives two credits, so it requires time. The result is a two-page essay that delivers an introduction to the designated topic. It could be automatically generated or collected from Wikipedia; the main requirement is that you are responsible for each statement in your essay. Each formula is yours.
The essay carries a comprehensive and strict answer to the topic question; illustrative plots are welcome. The result must be ready to be compiled into a joint manuscript after the even week, so please use the LaTeX template.
The style is that of set theory, algebra, analysis, and Bayesian statistics. Category theory and homotopy theory are welcome.
Requirements for the text and the discussion
- A comprehensive explanation of the method or question under discussion
- Only the principle, no experiments
- A two-page text (more or less)
- The reader is a second- or third-year student
- A picture is obligatory
- However, a brief reference to some deep learning structure is welcome
- The talk can be given from slides or from the text itself
- A list of references with DOIs
- State how the text was generated
- If you observe a gap, put a note about it (to question later)
Style remarks for the essays
Automatic generation of mediocre-quality texts has raised the requirements for the quality of new messages: it makes novelty rare and authorship appreciated, while simplifying the means of delivery. Since textbook generation has become simple, we will use generative chats to train our skills of reader persuasion. The reader is our MS-thesis defense committee.
Table of contents
- Multimodal data
- Multilinear models
- Riemannian spaces
- Neural differential equations
- Generative models??
Multimodal data
First series of questions
- Canonical Correlation Analysis: the forecasting model and the loss function with its variants
- Canonical Correlation Analysis in tensor representation
- The CCA parameter estimation algorithm (a minimal sketch follows this list)
- The connection between CCA and cross-attention
- Generative CCA
- Comparative analysis of CCA variants such as PLS and others
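For the parameter estimation question above, a minimal sketch of the classical CCA estimator (whiten both views, then take an SVD of the whitened cross-covariance), assuming NumPy; the function name cca, the regularization eps, and the toy two-view data are illustrative:

```python
import numpy as np

def cca(X, Y, k=2, eps=1e-8):
    """Classical CCA: whiten each view, then SVD the cross-covariance."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])  # regularized covariances
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via the eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    A = Wx @ U[:, :k]   # canonical weights for X
    B = Wy @ Vt[:k].T   # canonical weights for Y
    return A, B, s[:k]  # s holds the canonical correlations

# Toy usage: two views generated from a shared latent signal.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 2))
X = z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(500, 5))
Y = z @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(500, 4))
A, B, corrs = cca(X, Y)
print(corrs)  # close to 1 for the shared directions
```

The singular values are the canonical correlations; variants such as PLS maximize covariance instead of correlation, i.e. they drop the whitening step, which is one way to frame the comparative-analysis question above.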
General
- Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems arxiv 2023
- Algebra, Topology, Differential Calculus, and Optimization Theory For Computer Science and Machine Learning upenn 2024
- The Elements of Differentiable Programming arxiv 2024
- The list from the previous year 2023.
Prerequisites
- Understanding Deep Learning by Simon J.D. Prince mit 2023
- Deep Learning by C.M. Bishop and H. Bishop, Springer 2024 (online version)
- A Geometric Approach to Differential Forms by David Bachman arxiv 2013
1. Linear models
- A Tutorial on Independent Component Analysis arxiv, 2014
- On the Stability of Multilinear Dynamical Systems arxiv 2022
- Tensor-based Regression Models and Applications by Ming Hou, PhD thesis, Uni-Laval 2017
Spherical harmonics and the pendulum
- Spherical Harmonics in p Dimensions arxiv 2012
- Physics of the simple pendulum: a case study of nonlinear dynamics RG 2008
SSM
- Missing Slice Recovery for Tensors Using a Low-rank Model in Embedded Space arxiv 2018
SSM+generative
- (FLOW tex source) Masked Autoregressive Flow for Density Estimation arxiv 2017
SSM+Riemann+Gaussian process regression
- Time-series forecasting using manifold learning, radial basis function interpolation, and geometric harmonics by Ioannis G. Kevrekidis and Constantinos Siettos, 2022 pdf
PINN
- Three ways to solve partial differential equations with neural networks — A review arxiv 2021
- NeuPDE: Neural Network Based Ordinary and Partial Differential Equations for Modeling Time-Dependent Data arxiv 2019
- Physics-based deep learning code
- PINN by Steve Brunton yt
2. Riemannian models
SSA
Generative
- Riemannian Continuous Normalizing Flows arxiv 2020
3. Neural ODE
- Neural Spatio-Temporal Point Processes by Ricky Chen et al. iclr 2021 (likelihood for time and space)
- Neural Ordinary Differential Equations by Ricky Chen et al. arxiv 2018
- Neural Controlled Differential Equations for Irregular Time Series by Patrick Kidger et al. arxiv 2020, code github
- Diffusion Normalizing Flow arxiv 2021
- Differentiable Programming for Differential Equations: A Review arxiv 2024
- (code tutorial) Deep Implicit Layers - Neural ODEs, Deep Equilibrium Models, and Beyond nips 2020
- (code tutorial) 2021
CDE
- Neural CDE and tensors: https://ieeexplore.ieee.org/abstract/document/9979806 and https://ieeexplore.ieee.org/abstract/document/9533771
4. Graph and PDEs
- Fourier Neural Operator for Parametric Partial Differential Equations arxiv 2020
supplementary
- Masked Attention is All You Need for Graphs arxiv 2024
5. Neural SDE
- Approximation of Stochastic Quasi-Periodic Responses of Limit Cycles in Non-Equilibrium Systems under Periodic Excitations and Weak Fluctuations mdpi entropy 2017 (great illustrations on the stochastic nature of a simple phase trajectory)
- Neural SDEs for Conditional Time Series Generation arxiv 2023, code github (LSTM, CSig-WGAN)
- Neural SDEs as Infinite-Dimensional GANs 2021
- Efficient and Accurate Gradients for Neural SDEs by Patrick Kidger arxiv 2021 code diffrax
6. PINN and Neural PDE
- Process Model Inversion in the Data-Driven Engineering Context for Improved Parameter Sensitivities mdpi processes 2022 (nice connection pictures)
- Physics-based Deep Learning github
7. Chains and homology
- Operator Learning: Algorithms and Analysis arxiv 2024
- Homotopy theory for beginners by J.M. Moeller ku.dk 2015 (is it a pertinent link?)
To research
- Explorations in Homeomorphic Variational Auto-Encoding arxiv 2018
- Special Finite Elements for Dipole Modelling master thesis Bauer 2011
Appendix
- Neural Memory Networks stanford reports 2019
- An Elementary Introduction to Information Geometry by Frank Nielsen mdpi entropy
- The Many Faces of Information Geometry by Frank Nielsen ams 2022 (short version)
- Clifford Algebras and Dimensionality Reduction for Signal Separation by M. Guillemard Uni-Hamburg 2010, code
- Special Finite Elements for Dipole Modelling by Martin Bauer, Master Thesis, Erlangen 2012 (differential p-forms, must read)
- Bayesian model selection for complex dynamic systems 2018
- Visualizing 3-Dimensional Manifolds by Dugan J. Hammock 2013 umass
- At the Interface of Algebra and Statistics by T-D. Bradley arxiv 2020
- Time Series Handbook by Borja, 2021 github