'''Fundamental Theorems of Machine Learning''', a [https://mipt.ru/english/edu/bachelor/ BS-4] MIPT course
{{#seo:
|title=Fundamental Theorems of Machine Learning
|titlemode=replace
|keywords=Fundamental theorems of Machine Learning
|description=The course Fundamental Theorems of Machine Learning studies techniques and practice of theorem formulations and proofs in machine learning.
}}
To make the results of scientific research well-founded, the course introduces the techniques and practice of theorem formulation and proof in machine learning.
  
==Motivation and syllabus==
The goal of the course is to boost the quality of students' bachelor's and master's theses and to make the results of student research well-founded. The course studies techniques and practice of theorem formulation and proof in the field of machine learning.

Why does one need to convey an important message, a scientific result, as a theorem?
# Theorems are the most important messages in the field of research.
# Theorems present results in the language of mathematics, with its generality and rigor.
# Theorems are at the heart of mathematics and play a central role in its aesthetics.

Theorems present the message immediately and leave the reasoning for later, while direct narration puts the reasoning first and the results later.
* How does direct narration transform into fast narration?
* How do we find, state, and prove theorems in our work?

Both narration styles follow progressions:
# Textbook: Definition <math>\to</math> (Axiom set) <math>\to</math> Theorem <math>\to</math> Proof <math>\to</math> Corollaries <math>\to</math> Examples <math>\to</math> Impact on applications
# Scientific discovery: Application problems <math>\to</math> Problem generalisations <math>\to</math> Useful algebraic platform <math>\to</math> Definitions <math>\to</math> Axiom set

In practice, we mimic the first part of the textbook progression, then learn to discover patterns and formulate theorems. The lecture talks give a series of good examples, and a compressed one follows below.
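For illustration, here is a compressed instance of the textbook progression, with the singular value decomposition as the subject. This is a sketch for orientation only, not a syllabus item.
* Definition: a matrix <math>U</math> is orthogonal if <math>U^{\mathsf{T}}U = I</math>.
* Theorem: every <math>A \in \mathbb{R}^{m \times n}</math> admits a decomposition <math>A = U \Sigma V^{\mathsf{T}}</math> with orthogonal <math>U, V</math> and diagonal <math>\Sigma</math> with non-negative entries.
* Proof idea: apply the spectral theorem to <math>A^{\mathsf{T}}A</math>.
* Corollary (Eckart–Young): keeping the <math>k</math> largest entries of <math>\Sigma</math> gives the best rank-<math>k</math> approximation of <math>A</math> in the Frobenius norm.
* Example: a diagonal matrix with non-negative entries is already in SVD form.
* Impact on applications: principal component analysis, low-rank model compression.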
 
 
 
 
 
===Each lesson contains===
# A lecturer's talk on one of the fundamental theorems (<math>40' = 30' + 10'</math> discussion)
# Two students' talks (each <math>20' = 15' + 5'</math> discussion)

===Each student delivers two talks===
# On a theorem that is formulated in a paper from the reference list of the student's thesis
# On a theorem that is formulated and proved by the student
 
 
 
===Students are welcome to===
* Make variants of their own formulations and proofs
* Re-formulate significant messages of other researchers and state these messages as theorems

===Plan of a talk (and of a paper with theorems)===
# Introduction: the main message, briefly
# If necessary (these could be introduced during the talk):
## Axiom sets
## Definitions
## Algebraic structures
## Notations
# Theorem formulation and exact proof
## The author's variant of the proof may be improved
# Corollaries
# Theorem significance and applications
 
 
 
===Typography===
* Prepare the text as one (or two) pages: [https://drive.google.com/file/d/17AcostCAVSKfgK52MAelsSy_dC-sxDR4/view?usp=sharing example], [https://www.overleaf.com/read/wsmczggkzpgj template to download]
* Please
** set the font size to <math>\geqslant 14</math>pt
** include plots, diagrams, and freehand drawings
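A minimal LaTeX sketch that satisfies these requirements; this is our illustration only, the Overleaf template above is the authoritative one, and the <code>extarticle</code> class from the extsizes bundle is assumed for the 14pt option.
<pre>
\documentclass[14pt]{extarticle} % 14pt body font, per the requirement above
\usepackage{amsmath,amssymb,amsthm}
\usepackage{graphicx}            % for plots, diagrams, freehand drawings
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}{Corollary}

\begin{document}
\section*{Surname2021Literature} % name the pdf, tex, fig files the same way

% Introduction: the main message briefly; definitions and notation if necessary.

\begin{theorem}
Exact formulation of the theorem.
\end{theorem}

\begin{proof}
Exact proof; the author's variant may be improved.
\end{proof}

% Corollaries, significance, applications.
\end{document}
</pre>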
 
 
 
===The organization===
* Upload your text to the GitHub project [https://github.com/Intelligent-Systems-Phystech/FundamentalTheoremsML Intelligent-Systems-Phystech/FundamentalTheoremsML]: put the pdf, tex, and fig files into the group folder, named Surname2021Literature and Surname2021Research
* See the Youtube channel [https://www.youtube.com/channel/UC90B3Y_FbBRrRQk5TCiKgSA Machine Learning]
* Spring semester, Wednesdays at 14:30, on Zoom: m1p.org/go_zoom
 
 
 
===Scoring===
* Talks and text: 0–4 points each, graded by comparison
* An out-of-schedule talk halves the score
* The exam: 2 points, on proof schemes of various theorems
** a time-limited test (as in the state Physics exam) followed by discussion
** the theorem formulation and proof scheme are hand-written
** two random theorems from the list below, 10 minutes to write the text
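For orientation, a hand-written answer is expected at roughly the following level of compression. The example is ours and is not taken from the exam list: the least-squares estimator is unbiased.
# Assumptions: <math>y = X\beta + \varepsilon</math>, deterministic <math>X</math> of full column rank, <math>\mathsf{E}\,\varepsilon = 0</math>.
# Formulation: <math>\hat\beta = (X^{\mathsf{T}}X)^{-1}X^{\mathsf{T}}y</math> satisfies <math>\mathsf{E}\,\hat\beta = \beta</math>.
# Proof scheme: substitute <math>y</math> into <math>\hat\beta</math>, cancel <math>(X^{\mathsf{T}}X)^{-1}X^{\mathsf{T}}X = I</math>, and apply the linearity of expectation to the remaining term <math>(X^{\mathsf{T}}X)^{-1}X^{\mathsf{T}}\varepsilon</math>.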
 
 
 
==Theorems of Machine Learning==
# Fundamental theorem of linear algebra [https://www.engineering.iastate.edu/~julied/classes/CE570/Notes/strangpaper.pdf S]
# Singular value decomposition and the spectral theorem [https://en.wikipedia.org/wiki/Spectral_theorem W]
# Gauss–Markov (Aitken) theorem [https://en.wikipedia.org/wiki/Gauss–Markov_theorem W]
# Principal component analysis [https://en.wikipedia.org/wiki/Principal_component_analysis W]
# Karhunen–Loève theorem [https://en.wikipedia.org/wiki/Karhunen–Loève_theorem W]
# Kolmogorov–Arnold representation theorem [https://en.wikipedia.org/wiki/Kolmogorov–Arnold_representation_theorem W]
# Universal approximation theorem by Cybenko [https://en.wikipedia.org/wiki/Universal_approximation_theorem W]
# Deep neural network theorem [https://github.com/MarkPotanin/GeneticOpt/blob/master/Potanin2019NNStructure_APX.pdf Mark]
# Inverse function theorem and the Jacobian [https://en.wikipedia.org/wiki/Inverse_function_theorem W]
# No free lunch theorem by Wolpert [https://en.wikipedia.org/wiki/No_free_lunch_theorem W]
# RKHS by Aronszajn and Mercer's theorem [https://en.wikipedia.org/wiki/Mercer%27s_theorem W]
# Representer theorem by Schölkopf, Herbrich, and Smola [https://en.wikipedia.org/wiki/Representer_theorem W]
# Convolution theorem (Fourier transform, convolution, correlation; with CNN examples) [https://en.wikipedia.org/wiki/Convolution_theorem W]
# Fourier inversion theorem [https://en.wikipedia.org/wiki/Fourier_inversion_theorem W]
# Wiener–Khinchin theorem on autocorrelation and spectral decomposition [https://en.wikipedia.org/wiki/Wiener–Khinchin_theorem W]
# Parseval's theorem (and uniform, non-uniform convergence) [https://en.wikipedia.org/wiki/Parseval%27s_theorem W]
# Probably approximately correct learning and the theorem that compression implies learnability
# Bernstein–von Mises theorem [https://en.wikipedia.org/wiki/Bernstein–von_Mises_theorem W]
# Holland's schema theorem [https://en.wikipedia.org/wiki/Holland%27s_schema_theorem W]
# Variational approximation
# Convergence of random variables and Kloek's theorem [https://en.wikipedia.org/wiki/Convergence_of_random_variables W]
# Exponential family of distributions and Nelder's theorem
# Multi-armed bandit theorem
# Copulas and Sklar's theorem [https://en.wikipedia.org/wiki/Copula_(probability_theory) W]
# Boosting theorem by Freund and Schapire, 1995, 1996
# Bootstrap theorem (statistical estimation): ergodic theorem
# Miscellaneous: [http://www.machinelearning.ru/wiki/images/3/33/BershteinFonMises.pdf BershteinFonMises-1], [http://www.machinelearning.ru/wiki/images/3/33/BershteinFonMises.pdf BershteinFonMises-2], PAC learning (compression induces learning), [http://www.machinelearning.ru/wiki/images/b/ba/PAC_learning_compress.pdf PAC_learning_compress]
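As an example of the expected rigor of a formulation, here is item 7 in one common form, following Cybenko, 1989 (the hypotheses vary between sources): let <math>\sigma</math> be a continuous sigmoidal function, that is, <math>\sigma(t) \to 1</math> as <math>t \to +\infty</math> and <math>\sigma(t) \to 0</math> as <math>t \to -\infty</math>. Then the finite sums <math>G(x) = \sum_{j=1}^{N} \alpha_j \sigma(w_j^{\mathsf{T}} x + b_j)</math> are dense in <math>C([0,1]^n)</math>: for every continuous <math>f</math> on <math>[0,1]^n</math> and every <math>\varepsilon > 0</math> there exists <math>G</math> with <math>\sup_{x \in [0,1]^n} |f(x) - G(x)| < \varepsilon</math>.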
  
===Theorem types===
<!--* The connections between different areas of machine learning should be shown
* Probability, well-foundedness, generation and selection, Hadamard well-posedness, dimensionality reduction, convergence of algorithms -->
* Uniqueness, existence
* Universality
* Convergence [https://www.youtube.com/watch?v=Ajar_6MAOLw YouTube] <!--pointwise, uniform-->
* Complexity
* Properties of estimators
* Bounds
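An example of the last type is the standard uniform deviation bound, stated here under the textbook assumptions of a finite hypothesis class <math>\mathcal{H}</math> and a loss bounded in <math>[0,1]</math>: for an i.i.d. sample of size <math>m</math>, with probability at least <math>1 - \delta</math>, for all <math>h \in \mathcal{H}</math>
: <math>\bigl|R(h) - \hat{R}(h)\bigr| \leqslant \sqrt{\frac{\ln|\mathcal{H}| + \ln(2/\delta)}{2m}},</math>
where <math>R</math> is the expected risk and <math>\hat{R}</math> is the empirical risk. The proof scheme is Hoeffding's inequality plus a union bound over <math>\mathcal{H}</math>.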
  
==Schedule==
Spring semester 2021

===Student talks===
{|class="wikitable"
|-
! Speaker
! References
! Thesis work
|-
| Bishuk Anton
| 17.2 [https://github.com/ApostolAnt/Projects/blob/master/______.pdf link]
| 31.3 link
|-
| Weiser Kirill
| 17.2 [https://github.com/Nerkan78/IntelligentSystems/blob/main/Diploma/VayserKirill2020/MatheronRule.pdf link]
| 31.3 [https://github.com/Nerkan78/IntelligentSystems/blob/main/Diploma/VayserKirill2020/ErrorAnalysis.pdf link]
|-
| Grebenkova Olga
| 24.2 [https://github.com/Intelligent-Systems-Phystech/Grebenkova-BS-Thesis/raw/main/ELBo.pdf link]
| 7.4 link
|-
| Gunaev Ruslan
| 24.2 [https://github.com/Gunaev/Gunaev_BS-thesis/blob/main/th_diplom.pdf link]
| 7.4 link
|-
| Zholobov Vladimir
| 3.3 [https://github.com/Intelligent-Systems-Phystech/Zholobov-BS-Thesis/blob/main/Zholobov_thesis.pdf link]
| 14.4 link
|-
| Islamov Rustem
| 3.3 [https://github.com/Intelligent-Systems-Phystech/Islamov-BS-Thesis/blob/main/Fundamental%20theorems%20on%20Machine%20Learning/First%20report/Stochastic%20Newton%20method.pdf link]
| 14.4 link
|-
| Pankratov Victor
| 10.3 [https://github.com/Intelligent-Systems-Phystech/Pankratov_BS_Thesis/blob/main/link1.pdf link]
| 21.4 link
|-
| Savelyev Nikolay
| 10.3 [https://github.com/Intelligent-Systems-Phystech/Savelev-BS-Thesis/raw/main/Prediction_Learning_and_Games-B-18-21.pdf link]
| 21.4 link
|-
| Filatov Andrey
| 10.3 [https://github.com/Intelligent-Systems-Phystech/Filatov-BS-Thesis/blob/main/Fundamental%20Theorems/Theorem.pdf link]
| 21.4 link
|-
| Filippova Anastasia
| 17.3 link
| 28.4 link
|-
| Khar Alexandra
| 17.3 [https://github.com/Intelligent-Systems-Phystech/Khar-BS-Thesis/blob/main/otchet_1.pdf link]
| 28.4 link
|-
| Khristolyubov Maxim
| 24.3 [https://github.com/Intelligent-Systems-Phystech/Khristolyubov-BS-Thesis/blob/main/paper/Proof_of_the_theorem.pdf link]
| 5.5 link
|-
| Shokorov Vyacheslav
| 24.3 [https://github.com/Intelligent-Systems-Phystech/Shokorov-BS-Thesis/blob/main/report/VKR_Theorem.pdf link]
| 5.5 link
|}
 
 
 
===Invited talks===
{|class="wikitable"
|-
<!--! Date
! Topic-->
! Speaker
! Link
|-
<!--|February 10
|Introductory class (and the fundamental theorem of statistics)-->
| Strijov, Potanin
|10.2 [https://drive.google.com/file/d/17AcostCAVSKfgK52MAelsSy_dC-sxDR4/view?usp=sharing link]
|-
<!--|February 17
|Perceptron convergence theorem by F. Rosenblatt, Block, Joseph, Kesten-->
| Mark Potanin
|17.2 [https://drive.google.com/file/d/1Pu8mvexKkO45ED4MWSH-sZDusNNTgMpC/view?usp=sharing link]
|-
<!--|February 24
|Kolmogorov and Arnold theorems, Cybenko's universal approximation theorem, the deep neural network theorem-->
|Mark Potanin
|24.2 [https://drive.google.com/file/d/1Thm73TYyLXhoHNA_4uhyFB9Im26Ctjxp/view?usp=sharing link]
|-
<!--|March 10
|[[Media:BershteinFonMises.pdf|Bernstein–von Mises]]-->
|Andriy Grabovyi
|10.3 [http://www.machinelearning.ru/wiki/images/3/33/BershteinFonMises.pdf link]
|-
<!--|March 17
|[[Media:BershteinFonMises.pdf|Bernstein–von Mises]] (continued)-->
|Andriy Grabovyi
|17.3 [http://www.machinelearning.ru/wiki/images/3/33/BershteinFonMises.pdf link]
|-
<!--|March 24
|[[Media:PAC_learning_compress.pdf|PAC learnability, the theorem that compression implies learnability]]-->
|Andriy Grabovyi
|24.3 [http://www.machinelearning.ru/wiki/images/b/ba/PAC_learning_compress.pdf link]
|-
<!--|March 31
|Convergence in probability in model selection-->
|Mark Potanin
|31.3 [https://drive.google.com/file/d/1-rtOJtjivRs0TwOga8-MLaBEzCcUyD0H/view?usp=sharing link]
|-
<!--|April 7
|Minimum description length theorem--><!--Metric spaces: RKHS by Aronszajn, Mercer's theorem-->
|Oleg Bakhteev <!--Alexey Goncharov-->
|7.4 link
|-
<!--|April 14
|Convolution theorem (Fourier, convolution, autocorrelation) with examples of convolutional networks-->
|Philipp Nikitin
|14.4 link
|-
<!--|April 21
|Representer theorem, Schölkopf, Herbrich, and Smola-->
|Andriy Grabovyi
|21.4 link
|-
<!--|April 28
|Fourier inversion theorem, Parseval's theorem (uniform and non-uniform convergence)-->
|Philipp Nikitin
|28.4 link
|-
<!--|May 5
|Variational approximation, the Bayesian model selection theorem-->
|Oleg Bakhteev
|5.5 link
|-
<!--|May 12
|Review and discussion of written works: theorems and their proofs (to be included in the thesis)-->
| Potanin, Strijov
|12.5 Discussion
|-
<!--|May 26
|Exam: proof schemes of various theorems (timed test, as in the state Physics exam, and discussion)-->
|Potanin, Aduenko, Bakhteev
|26.5 Exam
|-
<!--|
|No free lunch theorem in machine learning, Wolpert
|Radoslav Neychev
|
|-
|
|Schema theorem, Holland
|Radoslav Neychev
|
|-->
|}
 
 
 
 
  
 
==References==

===Principles===
# Mathematical statistics by A.A. Borovkov, 1998
# [https://www.di.ens.fr/~fbach/ltfp_book.pdf Learning Theory from First Principles] by Francis Bach, 2021
# Theoretical foundations of the potential function method in pattern recognition by M.A. Aizerman, E.M. Braverman, and L.I. Rozonoer // Avtomatika i Telemekhanika, 1964. Vol. 25, pp. 917–936.

===Proof techniques===
# Proofs and Mathematical Reasoning by Agata Stefanowicz, 2014
# The Nuts and Bolts of Proofs by Antonella Cupillari, 2013
# Theorems, Corollaries, Lemmas, and Methods of Proof by Richard J. Rossi, 1956
# Problem Books in Mathematics, P.R. Halmos (editor), 1990
# Les contre-exemples en mathématique by Bertrand Hauchecorne, 2007
# Kolmogorov and Mathematical Logic by Vladimir A. Uspensky // The Journal of Symbolic Logic, Vol. 57, No. 2 (Jun., 1992), pp. 385–412.
# What is the axiomatic method? (Что такое аксиоматический метод?) by V.A. Uspensky, 2001
# The axiomatic method (Аксиоматический метод) by E.E. Zolin, 2015

===Methodology===
# [http://eqworld.ipmnet.ru/ru/library/books/Klini1957ru.djvu Introduction to Metamathematics] by Stephen Cole Kleene, 1950
# Science and Method by Henri Poincaré, 1908
# A Summary of Scientific Method by Peter Kosso, 2011
# Being a Researcher: An Informatics Perspective by Carlo Ghezzi, 2020
# The definitive glossary of higher mathematical jargon by Math Vault, 2015
# The definitive guide to learning higher mathematics: 10 principles to mathematical transcendence by Math Vault, 2020
# [https://en.wikipedia.org/wiki/List_of_mathematical_jargon List of mathematical jargon] on Wikipedia
# [https://cs9.pikabu.ru/post_img/big/2018/05/21/9/1526915408141416733.jpg Pikabu: typical proof methods, 2018] (for when you feel the proof is going off the rails)

===Supplementary material===
# Three works by Greg Yang: [https://arxiv.org/pdf/1910.12478.pdf arXiv:1910.12478], [https://arxiv.org/pdf/2006.14548 arXiv:2006.14548], [https://arxiv.org/pdf/2009.10685.pdf arXiv:2009.10685], [https://www.youtube.com/watch?v=kc9ll6B-xVU&list=PLt1IfGj6-_-ewBQJDVMJOJNlW5AbY6D3p&index=4&fbclid=IwAR3kIUQZWsh9j_Xp2TYb5ZmcsH7nFDIpCuRnmeoxoRJyPuxKvFyxTRI3ypY Youtube (Rus)]
# Theorems on flows by Johann Brehmer and Kyle Cranmer, [https://arxiv.org/pdf/2003.13913v2.pdf arXiv:2003.13913v2]
