Colloquium - Lifu Huang, "Bridging the Modern Language Modeling with the Complex Open World"
From cs-speakerseries
Abstract:
Recent advancements in large language models (LLMs) have marked a significant milestone in the field of natural language processing. Yet, as we venture into the diverse and intricate terrain of the open world—encompassing varied topics, domains, and modalities—these models encounter formidable challenges. Key among these are hallucination, grounded reasoning across modalities, and high computational demands. My research aims to tackle these challenges, focusing on three broad themes: (1) knowledge acquisition and understanding in the dynamic open world; (2) generalizable and efficient intelligence; (3) enhancing the human experience of interacting with these technologies. In this talk, I will mainly delve into the pursuit of generalizable and efficient intelligence. I'll first introduce our latest advancements in endowing models with a broader cognitive scope, evolving from answering simple questions to complex ones, and from understanding text to multimodality. I will also highlight our recent research on addressing task interference, a crucial but often overlooked issue, in the context of parameter-efficient tuning. The talk will conclude with a discussion of our future directions under these three pivotal themes.
Bio:
Lifu Huang is an Assistant Professor in the Computer Science department at Virginia Tech. He obtained a PhD in Computer Science from the University of Illinois at Urbana-Champaign in 2020. He has a wide range of research interests in natural language processing, machine learning, and multimodal learning. His research has been recognized with an Outstanding Paper Award at ACL 2023 and a Best Paper Award Honorable Mention at SIGIR 2023. He is a recipient of the NSF CAREER Award in 2023 and an Amazon Research Award in 2021.