NAACL 2024, June 20

Speakers

Keynote Speakers

Marti Hearst, University of California, Berkeley

Show It or Tell It? Text, Visualization, and Their Combination

Abstract: In this talk, Dr. Marti Hearst will share observations about the role of language in information visualization, posing questions such as: How do we decide what to express via language versus via visualization? How do we choose what kind of text to use when creating visualizations, and does that choice matter? Do some people prefer text over visuals, and if so, under what circumstances and why?

Bio: Dr. Marti Hearst is the Interim Dean of the School of Information and a Professor at UC Berkeley in the School of Information and the Computer Science Division. Her research encompasses user interfaces with a focus on scientific document understanding, information visualization with a focus on text, and computational linguistics. She is the author of Search User Interfaces, the first academic book on that topic. She is past President of the Association for Computational Linguistics, an ACM Fellow, a member of the CHI Academy, a SIGIR Fellow, and an ACL Fellow, and has received four Excellence in Teaching Awards.

Dan Roth, University of Pennsylvania & Amazon

Reasoning Myths about Language Models: What is Next?

Abstract: The rapid progress made over the last few years in generating linguistically coherent natural language has blurred, in the minds of many, the difference between natural language generation, understanding, and the ability to reason with respect to the world. Nevertheless, robust support of high-level decisions that depend on natural language understanding and that require dealing with “truthfulness” is still beyond our capabilities, partly because most of these tasks are very sparse, often require grounding, and may depend on new types of supervision signals.

Dan will discuss some of the challenges underlying reasoning and argue that we should focus on LLMs as orchestrators, coordinating and managing multiple models, applications, and services to execute complex tasks and processes. He will present some of his group's work in this space, focusing on supporting task decomposition and planning.

Bio: Dan Roth is the Eduardo D. Glandt Distinguished Professor at the Department of Computer and Information Science, University of Pennsylvania, a VP/Distinguished Scientist at AWS AI, and a Fellow of the AAAS, the ACM, AAAI, and the ACL. In 2017, Roth was awarded the John McCarthy Award, the highest award the AI community gives to mid-career AI researchers; he was recognized “for major conceptual and theoretical advances in the modeling of natural language understanding, machine learning, and reasoning.” Roth has published broadly in machine learning, natural language processing, knowledge representation and reasoning, and learning theory. He was the Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR) and has served as the Program Chair for AAAI, ACL, and CoNLL. Prof. Roth received his B.A. summa cum laude in Mathematics from the Technion, Israel, and his Ph.D. in Computer Science from Harvard University in 1995.

Invited Speakers

Diyi Yang, Stanford University

Training Social Skills via Human-AI Collaboration  

Abstract: Today, social skills are essential to success both on the job and in life. However, practice environments for social skills are typically out of reach for most people. How can we make social skill training more available, accessible, and inviting? In this talk, I share two of our recent works on social skill training using large language models. The first explores how to support therapists in learning therapy skills with LLM-powered feedback, and the second looks at training people in conflict resolution skills via simulated practice. Last but not least, we discuss risks, concerns, and mitigation strategies related to LLM-based simulation for social skill training. As a first step towards democratizing skill training, these works demonstrate how LLMs can empower individuals and foster positive change.

Bio: Diyi Yang is an assistant professor in the Computer Science Department at Stanford University. Her research focuses on human-centered natural language processing and computational social science. She is a recipient of the Microsoft Research Faculty Fellowship (2021), an NSF CAREER Award (2022), an ONR Young Investigator Award (2023), and a Sloan Research Fellowship (2024). Her work has received multiple paper awards or nominations at top NLP and HCI conferences (e.g., ACL, EMNLP, SIGCHI, and CSCW).

Hadas Kotek, Apple

Model-Aided Human Annotation at Scale

Abstract: In this talk, I survey some aspects of the use of model-assisted human annotation at Apple. In the first half of the talk, I discuss an implementation used by the Siri Natural Language Understanding team to support annotation of Siri traffic in over 40 locales, showing an average saving of almost 35% in time to task completion while maintaining a high level of accuracy. In the second half of the talk, I discuss a project that incorporates a Large Language Model into the annotation pipeline. Here, the model assisted with data sampling from a large corpus, supporting the creation of a benchmark of controversial topics. Although the model used to support the annotation process has changed over time, some common threads emerge. The use of models to augment human annotation is cost-saving, but we must proceed with caution: rely on the model in cases of higher confidence, but verify at least some subset of the data. Although LLM-generated content appears impressive, some results may be misleading.

Bio: Dr. Hadas Kotek is a senior data scientist on the Siri Natural Language Understanding team at Apple. She earned a PhD in Linguistics from MIT and previously held faculty positions at McGill University, New York University, and Yale University. Dr. Kotek develops methodologies for measuring the accuracy and efficiency of data annotation at scale, as well as the safety, robustness, and diversity of the resulting datasets and models, leveraging cross-functional teams to support innovative, product-centric research. Her most recent research is in the domains of model-in-the-loop annotation, ethical AI, and the efficacy of Large Language Models. In Fall 2023, she taught a full-semester seminar on Large Language Models at MIT, where she is currently a Research Affiliate.