About
This semester, we aim to understand and experiment with two closely related aspects of applying topology to deep learning: (1) the topology of artificial neural network (NN) architectures and (2) topological inputs for designing NNs. Among the various related topics, we place a continued emphasis on audio and speech signal processing and, more generally, on time series analysis. We meet Thursdays at 2 pm in M4009.
Presentations and Discussions
Jan 19, '24,
Organizational meeting

Topology of Neural Network architectures: Feedforward NN $\to$ Recurrent NN $\to$ Reservoir Computing ($\to$ Time-Delayed RC) $\to$ engineering implementation (photonic circuits?) ... CLDNN?

Topological inputs for designing NNs: Convolutional NN $\to$ Topological CNN, in the sense of
Carlsson et al. (from [Carlsson et al. 2008] to [Love et al. 2023])
 How are the above two aspects related?

Report on [Feng et al.] and what's next?
This research drew inspiration from Carlsson and his collaborators' discovery of the Klein-bottle distribution of high-contrast local patches of natural images [Carlsson et al. 2008], as well as their subsequent work on topological CNNs for learning image and even video data [Love et al. 2023]. By analogy, we aim to understand a distribution space for speech data, and even a directed-graph structure on it modeling the complex network of speech-signal sequences, for practical purposes such as speaker diarisation, and to see how these topological inputs may enable smarter learning.
Other suggestions?
 Comparative non-topological machine learning experiment, with input spectrograms instead of PD/MP: Done on Jan 25 and improved on Feb 22.
 How do weight vectors distribute in a CNN trained as in the above experiment?
 Sequence modeling: attention mechanisms
 librosa: audio and music processing in Python

Interpretable design of encoder–decoder bases from Dr. Yu's Dec 26, '23 talk on generative models for
speech processing
 Derivation of the Klein-filter formulas in [Love et al. 2023]
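For orientation, one commonly quoted parameterization of the Klein-bottle family of ideal high-contrast patches underlying these filters is the following; this is a hedged reconstruction, and the exact normalization conventions should be checked against [Carlsson et al. 2008] and [Love et al. 2023].

```latex
% Klein-bottle family of ideal high-contrast patches (hedged reconstruction).
% For angles \theta_1, \theta_2, the patch function on the square [-1,1]^2 is
\[
  F_{\theta_1,\theta_2}(x,y)
  = \sin\theta_2\,\bigl(x\cos\theta_1 + y\sin\theta_1\bigr)
  + \cos\theta_2\, q\bigl(x\cos\theta_1 + y\sin\theta_1\bigr),
  \qquad q(t) = 2t^2 - 1.
\]
% Since q is even, F_{\theta_1+\pi,\,-\theta_2} = F_{\theta_1,\theta_2}, so the
% parameter torus is quotiented by (\theta_1,\theta_2) \sim (\theta_1+\pi,\,-\theta_2),
% which makes the parameter space a Klein bottle. Discretizing F on an
% n \times n pixel grid yields the Klein filters of the topological
% convolutional layer.
```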
Feb 29, '24,
Pingyao Feng,
How to get a (non-topological) CNN up and running
Review what we did for consonant recognition: source of human speech data, TopCap workflow, topological descriptors, etc.
How the comparative non-topological machine learning experiment was done: input spectrograms instead of PD/MP.
Slides
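As a self-contained illustration of the spectrogram input used in this comparative experiment, here is a minimal NumPy short-time Fourier transform applied to a synthetic chirp. In practice one would call `librosa.stft` or `librosa.feature.melspectrogram` on real speech recordings; the signal, frame length, and hop size below are illustrative assumptions, not the experiment's actual settings.

```python
import numpy as np

def log_spectrogram(x, n_fft=512, hop=128):
    """Log-magnitude STFT; a minimal stand-in for librosa.stft."""
    window = np.hanning(n_fft)
    # Slice the signal into overlapping windowed frames.
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    # Real FFT of each frame -> shape (n_frames, n_fft // 2 + 1).
    mag = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return 20 * np.log10(mag + 1e-10)  # dB scale, with a floor to avoid log(0)

# Synthetic 1-second chirp at 16 kHz as a placeholder for speech.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * (200 + 300 * t) * t)
S = log_spectrogram(x)
print(S.shape)  # (n_frames, n_freq_bins) = (122, 257)
```

The resulting time-frequency array can then be fed to a CNN exactly like a single-channel image, which is what makes the spectrogram baseline directly comparable to the PD/MP inputs.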
Mar 7, '24,
Zhiwang Yu,
How to get a topological CNN up and running
Mar 14, '24,
No seminar,
Happy $\pi$ Day!
Mar 21, '24,
Haiyu Zhang,
How to capture the topology of a CNN over the course of training
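A natural first step for such an analysis, in the spirit of Carlsson–Gabrielsson's topology-of-weights work, is to treat the first-layer convolution kernels at each training checkpoint as a point cloud: flatten each kernel, mean-center it, and project it to the unit sphere before computing persistent homology. The sketch below uses random kernels as placeholders for trained weights, and the downstream call to Ripser mentioned in the comment is an assumption about tooling, not part of this seminar's pipeline.

```python
import numpy as np

def weights_to_point_cloud(kernels, eps=1e-8):
    """Flatten (K, kh, kw) kernels, mean-center each, project to the unit sphere.

    Filters with near-zero variance are dropped, mirroring the density
    filtering commonly used in topology-of-weights studies.
    """
    flat = kernels.reshape(kernels.shape[0], -1)
    centered = flat - flat.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(centered, axis=1)
    keep = norms > eps
    return centered[keep] / norms[keep, None]

rng = np.random.default_rng(0)
kernels = rng.standard_normal((64, 3, 3))   # placeholder for trained weights
cloud = weights_to_point_cloud(kernels)
print(cloud.shape)  # (64, 9); each row has unit norm
# `cloud` can now go to a persistence library, e.g. ripser.ripser(cloud)["dgms"],
# and repeating this per checkpoint tracks how the barcodes evolve in training.
```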
Mar 28, '24,
Zeyang Ding,
How to integrate persistent homology into graph neural networks
Apr 4, '24,
No seminar,
Qingming Festival break
Apr 11, '24,
Qingrui Qu,
Topological, categorical, statistical, and machine learning methods in natural language processing and computational number theory, I
Apr 18, '24,
Siheng Yi,
An introduction to multiparameter persistence
Apr 25, '24,
Zhiwang Yu and Haiyu Zhang,
Progress report on the topology of convolutional neural networks learning phonetic data
May 9, '24,
Qingrui Qu,
Topological, categorical, statistical, and machine learning methods in natural language processing and computational number theory, II
Jun 19, '24 (2 pm in M5024 / Tencent Meeting: 5208004008)
Pingyao Feng et al.,
Panel discussion/tutorial: Artificial-intelligence-aided academic writing, and other applications of computer hardware and software in mathematical study and research
References
Gunnar Carlsson, Topological methods for deep learning (Carlsson also gave several online lectures
on the topic.)
Ephy R. Love, Benjamin Filippenko, Vasileios Maroulas, and Gunnar Carlsson, Topological
convolutional layers for deep learning
Gunnar Carlsson and Rickard Brüel Gabrielsson, Exposition and interpretation of the topology of neural
networks
Gunnar Carlsson and Rickard Brüel Gabrielsson, Topological approaches to deep
learning
Xing-Yue Duan, Xiong Ying, Si-Yang Leng, Jürgen Kurths, Wei Lin, and Huan-Fei Ma,
Embedding theory of reservoir computing and reducing reservoir
network using time delays
Yichen Shen, Nicholas C. Harris, Scott Skirlo et al., Deep learning with coherent nanophotonic
circuits
T. N. Sainath, O. Vinyals, A. Senior, and H. Sak, Convolutional, long short-term memory, fully
connected deep neural networks
Laura Alessandretti, Andrea Baronchelli, and Yang-Hui He,
Machine learning meets number theory: The data science of
Birch–Swinnerton-Dyer
Tai-Danae Bradley, Juan Luis Gastaldi, and John Terilla, The structure of meaning
in language: Parallel narratives in linear algebra and category theory
Chaolong Ying, Xinjian Zhao, and Tianshu Yu, Boosting graph pooling with persistent homology
Gunnar Carlsson and Mikael Vejdemo-Johansson, Topological data analysis with
applications