Colloquium - Jay-Yoon Lee, "Automatically Capturing and Reflecting Latent Label Dependencies in Machine Learning Models"
From cs-speakerseries
Abstract:
In this talk, I will discuss recent work on automatically capturing latent dependencies in the label space from data, without explicit knowledge injection. When multivariate dependencies exist over a large output space, it is nearly impossible to annotate them. I will first briefly introduce structured prediction energy networks (SPENs), which can capture latent dependencies, and then present last year's NeurIPS publication that uses these Structured Energy networks As Loss functions (SEAL) to teach a simple feedforward network. We find that, rather than using a SPEN as a prediction network at inference time, using it as a trainable loss function is not only computationally efficient but also yields higher performance on multi-label classification, semantic role labeling, and binary image segmentation. I will also briefly introduce the idea of using spatial representations that can capture latent label dependencies such as taxonomic and logical dependencies.
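To make the SEAL setup above concrete, here is a minimal PyTorch sketch of the general idea: an energy network scores (input, label) pairs, and instead of being used for inference it serves as a trainable loss function for a simple feedforward predictor. The class names, the margin-based energy update, and the alternating training step are illustrative assumptions for a multi-label setting, not the exact formulation from the NeurIPS paper.

import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    """Scores a (features, soft-label) pair; lower energy = better fit.
    Illustrative stand-in for a structured energy network (SPEN)."""
    def __init__(self, in_dim, n_labels, hidden=64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(in_dim + n_labels, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x, y):
        return self.score(torch.cat([x, y], dim=-1)).squeeze(-1)

class Predictor(nn.Module):
    """Plain feedforward network producing soft multi-label scores."""
    def __init__(self, in_dim, n_labels, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_labels),
        )
    def forward(self, x):
        return torch.sigmoid(self.net(x))

def seal_step(energy, predictor, opt_e, opt_p, x, y_true, margin=1.0):
    # 1) Update the energy net so gold labels get lower energy than the
    #    predictor's current outputs (a simple margin loss; one of several
    #    plausible objectives, assumed here for illustration).
    y_pred = predictor(x).detach()
    e_loss = torch.relu(margin + energy(x, y_true) - energy(x, y_pred)).mean()
    opt_e.zero_grad(); e_loss.backward(); opt_e.step()

    # 2) Use the energy net as a *loss function* for the predictor:
    #    push predictions toward low-energy label configurations.
    #    Only the predictor's parameters are stepped here.
    p_loss = energy(x, predictor(x)).mean()
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()
    return e_loss.item(), p_loss.item()

# Hypothetical usage on random data, for shape-checking only:
x = torch.randn(32, 16); y = torch.randint(0, 2, (32, 5)).float()
e_net, p_net = EnergyNet(16, 5), Predictor(16, 5)
opt_e = torch.optim.Adam(e_net.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(p_net.parameters(), lr=1e-3)
seal_step(e_net, p_net, opt_e, opt_p, x, y)

The computational appeal described in the abstract comes from the prediction side: once trained, the feedforward predictor produces labels in a single forward pass, with no gradient-based inference over the energy function.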
Bio:
Jay-Yoon Lee is an assistant professor in the Graduate School of Data Science at Seoul National University (SNU). His research interest primarily lies in injecting knowledge and constraints into machine learning models using the tools of structured prediction, reinforcement learning, and multi-task learning. During his Ph.D. he worked on injecting hard constraints and logical rules into neural NLP models, and he is now expanding his research toward automatically capturing constraints, human-interactive models, and science problems such as protein interaction. Prior to joining SNU, he conducted postdoctoral research in the College of Information & Computer Sciences at UMass Amherst with Professor Andrew McCallum. Jay-Yoon received his Ph.D. in Computer Science from Carnegie Mellon University in 2020, where he was advised by Professor Jaime Carbonell, and his B.S. in electrical engineering from KAIST.