Brain-Computer Interfaces and Functional Data Analysis

This course is under construction. It covers the fundamental mathematical concepts of brain signal analysis.

Each class combines six parts:

  1. Comprehensive introduction
  2. Practical example with code and homework
  3. Algebraic part of modeling
  4. Statistical part of modeling
  5. Joining them in a Hilbert (or any other convenient) space
  6. Quiz for the next part (could be at the beginning) to show which theory to catch up on

Linear models

SSA, SVD, PCA

  • non-parametric phase space: the Hankel (trajectory) matrix
  • convolution?
  • forecasting with SSA (see the sketch after this list)
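
A minimal sketch of the SSA steps listed above, assuming a one-dimensional series, a Hankel trajectory matrix, and reconstruction by diagonal averaging; the function name, window size, and test signal are illustrative, not course code.

  # SSA sketch (assumed example): embed the series into a Hankel trajectory
  # matrix, take its SVD, and rebuild leading components by diagonal averaging.
  import numpy as np

  def ssa_components(x, window, n_components):
      N = len(x)
      K = N - window + 1
      # Hankel (trajectory) matrix: column j is the lagged segment x[j : j + window]
      X = np.column_stack([x[j:j + window] for j in range(K)])
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      components = []
      for i in range(n_components):
          Xi = s[i] * np.outer(U[:, i], Vt[i])        # rank-one SVD term
          # Diagonal averaging (Hankelization) maps the matrix back to a series
          comp = [np.mean(Xi[::-1].diagonal(k)) for k in range(-window + 1, K)]
          components.append(comp)
      return np.array(components)

  t = np.arange(500)
  x = np.sin(2 * np.pi * t / 50) + 0.3 * np.random.randn(500)   # noisy sine
  trend_and_cycle = ssa_components(x, window=100, n_components=2)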

Accelerometer data

  • Energy (see the sketch below)
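
A minimal sketch of the energy feature, assuming tri-axial accelerometer samples and a sliding-window mean of the squared magnitude; the array shapes and window size are assumptions for illustration.

  import numpy as np

  def windowed_energy(acc, window=128, step=64):
      """acc: array of shape (n_samples, 3) with x, y, z acceleration."""
      magnitude_sq = np.sum(acc ** 2, axis=1)              # |a_t|^2 per sample
      starts = range(0, len(acc) - window + 1, step)
      return np.array([magnitude_sq[s:s + window].mean() for s in starts])

  acc = np.random.randn(1000, 3)     # placeholder for a recorded accelerometer signal
  energy = windowed_energy(acc)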


Tensor product and spectral decomposition

  • vector, covector, dot product
  • linear operator
  • in Euclidean space and in a Hilbert space (with a useful example): the dot product as a bilinear form
  • bilinear form
  • factorization
  • spectral decomposition
  • SVD (see the sketch after this list)
  • SVD in a Hilbert space (open question)
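
A small NumPy illustration of the items above (an assumed example, finite-dimensional Euclidean case only): a bilinear form, the spectral decomposition of a symmetric operator as a sum of rank-one tensor products, and the SVD of a general matrix.

  import numpy as np

  rng = np.random.default_rng(0)
  A = rng.standard_normal((5, 5))
  S = A + A.T                                   # symmetric operator

  # Bilinear form defined by S: b(u, v) = u^T S v
  u, v = rng.standard_normal(5), rng.standard_normal(5)
  b_uv = u @ S @ v

  # Spectral decomposition: S = sum_i lambda_i (v_i tensor v_i); outer = tensor product
  eigvals, eigvecs = np.linalg.eigh(S)
  S_rebuilt = sum(lam * np.outer(vec, vec) for lam, vec in zip(eigvals, eigvecs.T))
  assert np.allclose(S, S_rebuilt)

  # SVD of a general matrix: A = U diag(s) V^T, a sum of rank-one terms u_i tensor v_i
  U, s, Vt = np.linalg.svd(A)
  assert np.allclose(A, U @ np.diag(s) @ Vt)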

Why do we go from Euclidean to Hilbert space? Previously, a vector was a set of measurements. Now it is a finite number of samples, and then it becomes a distribution of samples. The distribution is a point in a Hilbert space, so we can take an inner product and a tensor product of two or more distributions. Machine learning view: given samples, a multivariate distribution can be represented as a (direct?) sum of tensor products of its elements (see the formulas below).
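
One way to write this out, assuming the distributions are represented by square-integrable densities (an illustrative reading in LaTeX, not a definition from the course):

  % densities p, q standing for the two distributions, as points of L^2
  \langle p, q \rangle = \int p(x)\, q(x)\, dx,
  \qquad
  (p \otimes q)(x, y) = p(x)\, q(y),
  % and a multivariate density approximated by a sum of tensor products
  % of one-dimensional factors (the machine-learning remark above)
  p(x_1, \dots, x_d) \approx \sum_{k=1}^{r} w_k\, p_{k,1}(x_1) \cdots p_{k,d}(x_d).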

PPCA

  • PPCA (TensorFlow Probability tutorial: https://www.tensorflow.org/probability/examples/Probabilistic_PCA)
  • How to tell a stochastic variable from a deterministic one? Are the expectation and variance deterministic?
  • Recap: joint and conditional distributions, marginalization.
  • Sampling principle
  • VAE as a PPCA encoder-decoder (see the sketch after this list)
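
A minimal NumPy sketch of the PPCA generative model from the linked tutorial, with illustrative parameter names and sizes; it also touches the sampling principle and the stochastic-versus-deterministic question above. A VAE with a linear Gaussian decoder has exactly this generative model, and its encoder approximates the Gaussian posterior p(z | x).

  import numpy as np

  rng = np.random.default_rng(0)
  D, d, n = 10, 2, 500                     # observed dim, latent dim, sample size
  W = rng.standard_normal((D, d))          # loading matrix (deterministic parameter)
  mu = rng.standard_normal(D)              # mean (deterministic parameter)
  sigma = 0.5                              # noise scale (deterministic parameter)

  # Sampling principle: draw the stochastic variables top-down from the joint
  # p(x, z) = p(z) p(x | z); marginalizing z amounts to discarding it.
  z = rng.standard_normal((n, d))                          # z ~ N(0, I_d)
  x = z @ W.T + mu + sigma * rng.standard_normal((n, D))   # x | z ~ N(W z + mu, sigma^2 I)

  # Marginally x ~ N(mu, W W^T + sigma^2 I): the expectation and covariance are
  # deterministic functions of the deterministic parameters W, mu, sigma.
  model_cov = W @ W.T + sigma ** 2 * np.eye(D)
  empirical_cov = np.cov(x, rowvar=False)   # approaches model_cov as n grows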

Introduction to BCI

Decoding problem

Models of BCI

References