Colloquium - Makoto Yamada, "Approximating the 1-Wasserstein Distance: Applications in Self-Supervised Learning and Beyond"
From cs-speakerseries 3/25/2025
Abstract:
In this seminar, we will first introduce an efficient approximation method for the Wasserstein distance [1]. While the Wasserstein distance is a powerful measure for comparing distributions, it is computationally expensive. To address this, we propose the Tree-Wasserstein Distance (TWD), which approximates the 1-Wasserstein distance using a tree structure. By employing L1 regularization to learn the tree's edge weights, we can formulate the approximation as a Lasso regression problem and obtain efficient, globally optimal solutions. Next, we introduce a self-supervised learning method based on the Wasserstein distance and SimCLR [2]. Finally, we will discuss the intersection of self-supervised learning (SSL) and neuroscience theories [3]. Inspired by predictive coding and the temporal prediction hypothesis, we propose PhiNet, an extension of SimSiam with two predictors mimicking the CA3 and CA1 regions of the hippocampus. PhiNet demonstrates more stable representation learning and better adaptability in online and continual learning scenarios. This work suggests that the temporal prediction hypothesis provides a plausible model for understanding robust and adaptive learning in SSL.
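To illustrate why a tree structure makes the 1-Wasserstein distance cheap to evaluate: on a tree, the distance reduces to a closed-form weighted sum of subtree mass differences, computable in linear time. The sketch below is illustrative only (the tree, weights, and distributions are made-up toy data, not from the paper, and this is not the authors' implementation):

```python
def tree_wasserstein(parent, weight, mu, nu):
    """Closed-form tree-Wasserstein distance.

    parent[i]: parent of node i (root has parent -1), with parent[i] < i
    weight[i]: weight of the edge connecting node i to its parent
    mu, nu:    probability mass placed on each node
    Returns sum over edges e of weight(e) * |mu(subtree below e) - nu(subtree below e)|.
    """
    n = len(parent)
    diff = [mu[i] - nu[i] for i in range(n)]  # per-node mass difference
    dist = 0.0
    # Visit children before parents, accumulating subtree mass bottom-up.
    for i in range(n - 1, -1, -1):
        if parent[i] >= 0:
            dist += weight[i] * abs(diff[i])
            diff[parent[i]] += diff[i]
    return dist

# Toy tree: node 0 is the root; 1, 2 are children of 0; 3, 4 are children of 1.
parent = [-1, 0, 0, 1, 1]
weight = [0.0, 1.0, 1.0, 0.5, 0.5]
mu = [0.0, 0.0, 0.5, 0.5, 0.0]  # mass on nodes 2 and 3
nu = [0.0, 0.0, 0.0, 0.0, 1.0]  # all mass on node 4
print(tree_wasserstein(parent, weight, mu, nu))  # → 1.75
```

The single bottom-up pass makes the cost linear in the number of nodes, in contrast to the super-quadratic cost of solving the general optimal transport problem; the Lasso formulation in [1] is then used to choose the edge weights so this tree distance approximates the true 1-Wasserstein distance.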
[1] Makoto Yamada, Yuki Takezawa, Ryoma Sato, Han Bao, Zornitsa Kozareva, Sujith Ravi. Approximating 1-Wasserstein Distance with Trees. TMLR 2022.
[2] Makoto Yamada, Yuki Takezawa, Guillaume Houry, Kira Michaela Dusterwald, Deborah Sulem, Han Zhao, Yao-Hung Hubert Tsai. An Empirical Study of Self-supervised Learning with Wasserstein Distance. Entropy 2024.
[3] Satoki Ishikawa, Makoto Yamada, Han Bao, Yuki Takezawa. PhiNets: Brain-inspired Non-contrastive Learning Based on Temporal Prediction Hypothesis. ICLR 2025.
Bio:
Makoto Yamada is currently an Associate Professor in the Machine Learning and Data Science Unit at the Okinawa Institute of Science and Technology (OIST), a leading and innovative university in Japan. He has worked on machine learning problems, particularly in explainable AI (XAI), self-supervised learning, and optimal transport, for over 10 years. His research has been published in top-tier machine learning and data mining venues such as NeurIPS, ICML, ICLR, and AISTATS. He has received several awards, including the Outstanding SPC Award from WSDM, the Best Paper Award at WSDM 2016, and the Excellence Award from Yahoo Labs.