Brain-Computer Interfaces and Functional Data Analysis
This course is under construction. It covers the fundamental mathematical concepts of brain signal analysis.
Each class combines six parts:
- Comprehensive introduction
- Practical example with code and homework
- Algebraic part of the modeling
- Statistical part of the modeling
- Joining the algebraic and statistical parts in a Hilbert (or any other convenient) space
- Quiz for the next class (possibly given at the beginning) showing which theory to catch up on
Linear models
SSA, SVD, PCA
- non-parametric phase space: the Hankel (trajectory) matrix
- convolution (?)
- forecasting with SSA (a minimal sketch follows this list)
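A minimal SSA sketch, assuming a scalar series; the window length, rank, and function name are illustrative choices, not course material. The series is embedded into a Hankel trajectory matrix, the SVD is truncated, and anti-diagonals are averaged to reconstruct a smoothed component.

```python
import numpy as np

def ssa_reconstruct(x, window=30, rank=3):
    """Basic SSA: Hankel embedding -> truncated SVD -> diagonal averaging."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: column j is the lagged window x[j : j+window]
    X = np.column_stack([x[j:j + window] for j in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep the leading `rank` components: the non-parametric "signal" subspace
    X_r = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    # Diagonal averaging (Hankelization) maps the matrix back to a series
    rec = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        rec[j:j + window] += X_r[:, j]
        counts[j:j + window] += 1
    return rec / counts

t = np.linspace(0, 10, 500)
noisy = np.sin(2 * np.pi * t) + 0.3 * np.random.randn(t.size)
smooth = ssa_reconstruct(noisy, window=50, rank=2)
```

Forecasting then continues the reconstructed component beyond the observed range using the same low-rank subspace.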
Accelerometer data
- Energy (see the sketch below)
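A tiny illustration of the energy feature for an accelerometer segment; the three-axis layout, the mean-removal step, and the function name are assumptions made for the example.

```python
import numpy as np

def segment_energy(acc):
    """Mean signal energy of a 3-axis accelerometer segment.

    acc: array of shape (n_samples, 3), one column per axis.
    Energy here is the mean squared magnitude of the mean-removed signal.
    """
    centered = acc - acc.mean(axis=0)   # remove gravity / DC offset
    return float(np.sum(centered ** 2) / len(acc))

segment = np.random.randn(500, 3)       # stand-in for a real recording
print(segment_energy(segment))
```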
Tensor product and spectral decomposition
- vector, covector, dot product
- linear operator
- in Euclidean and Hilbert spaces (with a useful example), the dot product as a bilinear form
- bilinear form
- factorization
- spectral decomposition
- SVD (see the numerical check after this list)
- SVD in Hilbert space (open question)
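The algebra above can be previewed numerically: a symmetric matrix has a spectral decomposition A = Q diag(lam) Q^T, any matrix has an SVD, and a matrix A induces the bilinear form (x, y) -> x^T A y. An illustrative numpy check, not part of the course:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T                       # symmetric: the spectral theorem applies

# Spectral decomposition A = Q diag(lam) Q^T with orthonormal eigenvectors
lam, Q = np.linalg.eigh(A)
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)

# SVD B = U diag(s) V^T exists for any matrix, symmetric or not
U, s, Vt = np.linalg.svd(B)
assert np.allclose(U @ np.diag(s) @ Vt, B)

# The bilinear form given by A: (x, y) -> x^T A y
x, y = rng.standard_normal(4), rng.standard_normal(4)
print(x @ A @ y)
```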
Why do we go from Euclidean to Hilbert space? Previously, a vector was a tuple of measurements. Then it becomes a finite number of samples, and next a distribution of samples. The distribution is a point in a Hilbert space, so we can take inner products and tensor products of two or more distributions. In machine learning terms: given samples, a multivariate distribution can be represented as a (direct?) sum of tensor products of its elements.
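One concrete construction matching this paragraph is the kernel mean embedding, where a sample distribution becomes a point in an RKHS and the inner product of two distributions is an average of pairwise kernel values. This specific choice (RBF kernel, the helper names below) is an assumption made for illustration:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def embedding_inner(xs, ys, gamma=1.0):
    """<mu_P, mu_Q> in the RKHS: the mean of pairwise kernel values."""
    return rbf(xs, ys, gamma).mean()

rng = np.random.default_rng(1)
p = rng.normal(0.0, 1.0, size=(200, 2))   # samples from P
q = rng.normal(0.5, 1.0, size=(200, 2))   # samples from Q
print(embedding_inner(p, q))              # inner product of two distributions
```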
PPCA
- PPCA (probabilistic PCA)
- Sampling principle
- VAE as a PPCA-style encoder-decoder (a closed-form PPCA sketch follows this list)
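Probabilistic PCA admits a closed-form maximum-likelihood fit (Tipping and Bishop): take the top-q eigenpairs of the sample covariance, estimate the noise variance as the mean of the discarded eigenvalues, and set W = U_q (Lam_q - sigma^2 I)^{1/2}. A minimal sketch of those formulas; the synthetic data and function name are illustrative:

```python
import numpy as np

def ppca_ml(X, q=2):
    """Closed-form ML fit of PPCA: x ~ N(mu, W W^T + sigma^2 I)."""
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)
    lam, U = np.linalg.eigh(S)
    lam, U = lam[::-1], U[:, ::-1]        # eigenvalues in descending order
    sigma2 = lam[q:].mean()               # noise = mean discarded variance
    W = U[:, :q] * np.sqrt(np.maximum(lam[:q] - sigma2, 0.0))
    return mu, W, sigma2

rng = np.random.default_rng(2)
Z = rng.standard_normal((500, 2))         # latent codes
A = rng.standard_normal((2, 5))           # linear decoder
X = Z @ A + 0.1 * rng.standard_normal((500, 5))
mu, W, sigma2 = ppca_ml(X, q=2)
print(W.shape, sigma2)                    # (5, 2), roughly 0.01
```

In this sense the VAE can be read as a nonlinear, amortized generalization of PPCA: a linear Gaussian decoder recovers the model above, while nonlinear decoders give the variational autoencoder.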