About
This semester we held weekly writing sessions on Wednesdays at 9 am until early April; our discussions resume in May. We meet Thursdays at 4:20 pm in M5024.
Presentations and Discussions
May 8, '25,
Yifei Zhu,
Where we are now, and what's next
May 15, '25,
Zhiwang Yu,
Methods of algebraic topology in the study of neural network architecture
-
In this week's seminar, we will revisit the application of topological convolutional kernels in neural networks. We will explore what convolutional kernels compute, how spectrograms differ from ordinary images, and how to construct convolutional kernels
tailored to speech signals. We will also discuss some remaining challenges, as well as potential directions for future work.
- Slides
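As background for the talk above, here is a minimal sketch of a convolutional kernel sliding over a spectrogram. It is only an illustration of the general setup, not the speaker's construction, and all signal parameters are made up. The point is that a spectrogram's two axes (frequency and time) are not interchangeable the way an image's two spatial axes are, so a kernel and its transpose detect different things.

```python
# Illustrative only: convolving a kernel over a spectrogram.
# Unlike an ordinary image, the axes of a spectrogram are not
# interchangeable: rows are frequency bins, columns are time frames.
import numpy as np
from scipy.signal import spectrogram, convolve2d

fs = 8000                                # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)  # noisy tone

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256)   # Sxx: (freq, time) image

# A 1x3 difference kernel slides along the *time* axis and responds to
# onsets/offsets; its transpose would instead respond to spectral edges.
kernel = np.array([[-1.0, 0.0, 1.0]])
response = convolve2d(Sxx, kernel, mode="same")

print(Sxx.shape, response.shape)
```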
May 22, '25 in M616,
Haiyu Zhang,
Understanding neural networks: A perspective on representability and interpretability
-
In this talk, I will review work on neural network approximation theory, explaining the roles of depth and width in function approximation and the representability of different architectures (MLPs, RNNs, Transformers, etc.). I will also present some results on
visualizing the training process of neural networks from geometric and topological perspectives, and discuss the grokking phenomenon.
- Slides
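For context on the approximation-theory portion, one classical statement (paraphrased here from the results of Cybenko and of Leshno et al.; the talk may use a different formulation) is that a single hidden layer already suffices for density in C(K), for a continuous non-polynomial activation \sigma and compact K \subset \mathbb{R}^d:

```latex
\forall f \in C(K),\ \forall \varepsilon > 0,\ \exists N,\
\{a_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^d\}_{i=1}^{N}
\quad\text{such that}\quad
\sup_{x \in K} \Bigl|\, f(x) - \sum_{i=1}^{N} a_i\, \sigma(w_i^{\top} x + b_i) \Bigr| < \varepsilon.
```

Depth enters because, by known depth-separation results, the width N required by such shallow networks can grow exponentially for certain functions that deeper networks represent compactly.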
May 29, '25,
Zeyang Ding,
An application of topological data analysis to natural language processing
-
In this talk, I will introduce a detection framework for machine-generated text based on topological data analysis (TDA), due to Kushnareva et al. (2021). I will begin by showing how attention maps from Transformer-based language models can be
represented as weighted bipartite graphs and analyzed through TDA to capture the surface-level and structural patterns that distinguish human-written from machine-generated texts. The framework extracts three interpretable sets of topological features from frozen BERT attention heads: standard graph
invariants (connected components, cycles, edge counts), persistence-based barcodes, and distances to canonical attention patterns. A simple linear classifier trained on these features outperforms count-based and neural baselines by up to
10% across social-media, news, and product-review domains, achieving performance on par with a fully fine-tuned BERT model. Furthermore, I will show that the TDA-based detector remains more robust when applied to texts generated by larger, unseen GPT-style
models, and that probing analyses reveal its sensitivity to key surface and syntactic cues. Finally, I will discuss how this idea might be employed for speech data. (A toy sketch of the graph construction appears after the references below.)
- Slides
-
Related references:
Tulchinskii et al., Topological data analysis for speech processing, 2023
Barannikov et al., Representation topology divergence: A method for comparing neural network representations, 2022
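As a toy illustration of the first feature family in the abstract above (a sketch under simplifying assumptions, not the pipeline of Kushnareva et al.: the attention matrix is random, the thresholds are arbitrary, and graph invariants are the only features computed):

```python
# Toy sketch: threshold an attention map at several levels, view each
# thresholded matrix as an undirected graph on the tokens, and record
# simple graph invariants (edges, components, independent cycles).
import numpy as np

def find(parent, i):                      # union-find with path compression
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def graph_features(attn, thresholds=(0.01, 0.05, 0.1, 0.25)):
    """attn: (n_tokens, n_tokens) attention matrix from one head."""
    sym = np.maximum(attn, attn.T)        # symmetrize, a modeling choice
    n = sym.shape[0]
    feats = []
    for tau in thresholds:
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if sym[i, j] >= tau]
        parent = list(range(n))
        for i, j in edges:                # union the endpoints of each edge
            ri, rj = find(parent, i), find(parent, j)
            if ri != rj:
                parent[ri] = rj
        components = len({find(parent, k) for k in range(n)})
        # first Betti number of a graph: b1 = |E| - |V| + #components
        cycles = len(edges) - n + components
        feats += [len(edges), components, cycles]
    return np.array(feats)

rng = np.random.default_rng(0)
attn = rng.dirichlet(np.ones(16), size=16)   # stand-in for one BERT head
print(graph_features(attn))
```

In the paper these invariants are concatenated with barcode-derived and distance-to-pattern features across many heads before the linear classifier; the sketch stops at the first family.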
Jun 5, '25 at 9 am,
Pingyao Feng,
Topological methods for signal variation detection
-
We will discuss different kinds of “variations” in signals, how topological methods can distinguish those variations, and possible future directions.
- Slides
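The abstract leaves the methods open, but one standard topological tool for detecting variation in a 1D signal is 0-dimensional sublevel-set persistence, sketched below in self-contained Python (a minimal union-find implementation on a synthetic signal; the signal and all names are illustrative):

```python
# Sketch: 0-dim sublevel-set persistence of a 1D signal. Each local
# minimum gives birth to a component; the persistence (death - birth)
# of a pair measures how prominent that dip is, so large-persistence
# pairs flag genuine variation while small ones flag noise.
import numpy as np

def sublevel_persistence(y):
    """Return (birth, death) pairs for the sublevel-set filtration of y."""
    n = len(y)
    order = np.argsort(y, kind="stable")   # sweep values from low to high
    parent = {}                            # union-find over active indices
    root_min = {}                          # component root -> birth value

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i in order:
        parent[i] = i
        root_min[i] = y[i]                 # a new component is born at y[i]
        for j in (i - 1, i + 1):           # merge with active neighbors
            if 0 <= j < n and j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # elder rule: the younger component (higher birth) dies
                    if root_min[ri] > root_min[rj]:
                        ri, rj = rj, ri
                    pairs.append((root_min[rj], y[i]))
                    parent[rj] = ri
    pairs.append((min(y), np.inf))         # the global minimum never dies
    return pairs

t = np.linspace(0, 4 * np.pi, 400)
signal = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
top = sorted(sublevel_persistence(signal),
             key=lambda p: p[1] - p[0], reverse=True)[:5]
for b, d in top:
    print(f"birth={b:.2f} death={d:.2f} persistence={d - b:.2f}")
```

On this synthetic signal, the few large-persistence pairs correspond to the sine's dips while the many near-zero pairs come from the added noise, which is the kind of separation between variation types the talk considers.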