Kristina Lerman

Cyberpsychology: Emotion as the Hidden Driver of Social Behavior in Online Networks

Bio: Kristina Lerman is a Professor of Informatics at Indiana University’s Luddy School of Informatics, Computing and Engineering and a fellow of the AAAI. Trained as a physicist, she now applies network analysis and machine learning to problems in computational social science, including crowdsourcing, social network analysis, and social media analysis. Her work on modeling and understanding human behavior in online social networks has been covered by the Washington Post, the Wall Street Journal, and The Atlantic.

Abstract: Emotions shape every aspect of social life, yet their role in digital communication has been underexplored. Research on social media has largely focused on how people share and consume information online, while comparatively less attention has been paid to how emotions organize attention, shape identity, and drive collective behavior. This talk advocates viewing social media platforms as emotional ecosystems, where affect is not merely expressed but also spreads between people, interacts with beliefs and psychological states, and transforms the social dynamics of entire populations.

Advances in natural language processing have given us tools to detect discrete emotions and moral sentiments from online text. These tools helped reveal that online platforms do more than transmit emotion: they enable emotional contagion, whereby exposure to others’ emotions shapes users’ own emotional expressions, beliefs, sense of identity, and feelings of trust and belonging.

The talk shows how these emotional dynamics underlie a range of emergent social phenomena. In the political domain, emotional dynamics contribute to affective polarization, characterized by in-group favoritism and out-group animosity. It shows that interactions with ideological out-groups contain more anger, disgust, and toxic language, while in-group interactions express more joy and shared fear, reinforcing group cohesion and a sense of safety. These emotional asymmetries help explain why echo chambers feel psychologically protective while simultaneously deepening ideological divides and eroding trust.

Beyond politics, emotional dynamics also shape mental health outcomes. The talk examines communities organized around harmful identities and behaviors, such as pro-eating-disorder spaces, where emotional validation and peer support coexist with the normalization of self-harm and psychopathologies. In these settings, emotional contagion and group dynamics draw vulnerable individuals into feedback loops that entrench maladaptive beliefs and impede recovery. These dynamics are similar to those of online radicalization, highlighting common emotional pathways across seemingly disparate domains.

The talk concludes by examining an emergent phenomenon in digital emotional life: emotionally intelligent AI. Modern AI chatbots trained on large corpora of human conversation possess remarkable emotional intelligence and an ability to mirror the emotions of their conversation partners. This capacity enables them to emulate core mechanisms of human intimacy formation, thereby accelerating emotional bonding and fostering illusions of intimacy, particularly among vulnerable users. This presents novel risks for emotional manipulation and psychological distress.

Through large-scale empirical studies and dynamical systems modeling, the talk demonstrates how emotional dynamics—rather than factual disagreement or disinformation alone—drive societal division and mental health challenges. Understanding the social psychology of digital emotions is not just a scientific opportunity but a necessary step toward creating healthier, more resilient online spaces.

James Caverlee

Personalization in the Era of Super(?)-intelligence

Bio: James Caverlee is a Professor in the Department of Computer Science and Engineering at Texas A&M University and a Visiting Researcher at Google DeepMind. His research focuses on personalization, efficiency, and AI risks in domains like LLMs, recommender systems, conversational systems, and speech. His work has been supported by an NSF CAREER award, an AFOSR Young Investigator Award, a DARPA Young Faculty Award, and grants from Google, Amazon, AFOSR, DARPA, and the NSF. He received the 2022 SIGIR Test of Time Award Honorable Mention, the 2020 CIKM Test of Time Award, plus several departmental and college-level teaching awards. He was the General Co-Chair of WSDM 2020 and serves as a Senior Associate Editor of ACM Transactions on Intelligent Systems and Technology.

Abstract: For decades, the WSDM community has viewed personalization through the lens of collaborative filtering and predictive behavioral models. Today, we are witnessing a fundamental shift where AI systems promise not just to predict user intent, but to actively reason, plan, and create on our behalf. Instead of merely reinforcing our existing habits, these new approaches promise to surface insights we are blind to, guiding us toward discoveries that advance our personal journeys in new and unexpected ways. But are we truly on the verge of super-intelligent personalization?

In this talk, I will trace the evolution of AI-driven personalization over the past few years from simple data augmenters for our traditional recommendation pipelines, to recursive self-improvement of LLM-powered models, to complex personalized reasoning agents. Looking forward, I will identify opportunities and challenges to this vision of super-intelligent personalization, drawing on recent findings in multi-modal and speech foundation models.

Ed H. Chi

The Future of Personalized Universal Assistant

Bio: Ed H. Chi is VP of Research at Google DeepMind, leading machine learning research teams working on large language models (from LaMDA to the launch of Bard/Gemini) and universal assistant agents. With 39 patents and ~200 research articles, he is also known for research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard/Gemini, a conversational chatbot experiment. His research has also delivered significant improvements for YouTube, News, Ads, and the Google Play Store at Google, with >1000 product landings and ~$10.4B in annual revenue since 2013.

Prior to Google, he was Area Manager and Principal Scientist at Xerox Palo Alto Research Center’s Augmented Social Cognition Group, researching how social computing systems help groups of people remember, think, and reason. Ed earned his 3 degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Inducted as an ACM Fellow and into the CHI Academy, he also received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including the Economist, Time Magazine, the LA Times, and the Associated Press. An avid golfer, swimmer, photographer, and snowboarder in his spare time, he also has a black belt in Taekwondo.

Abstract: We’ve moved way beyond the old days of building discovery, recommendation, decision support, and other AI tools using traditional ML and pattern recognition techniques. The future of universal personal assistance for discovery and learning is upon us. How will the multimodal image, video, and audio understanding and the reasoning abilities of large foundation models change how we build these systems? I will shed some initial light on this topic by discussing 3 trends: first, the move to a single multimodal large model with reasoning abilities; second, the fundamental research on personalization and user alignment; third, the combination of System 1 and System 2 cognitive abilities into a single universal assistant.

Industry Day Keynote Speakers

Hong Yan

Digital Twins for Personalization: Re‑Inventing Billion‑User‑Scale Recommendation Systems With AI

Bio: Hong Yan serves as a VP of Engineering at Meta, overseeing the technical direction of search and recommendation systems. His career bridges industry-leading engineering with academic contributions, including an ACM Test of Time Award and numerous patents. He holds a PhD in Computer Science from Carnegie Mellon University, a bachelor’s degree from Tsinghua University, and was a recipient of the Gordon Wu Fellowship in Engineering at Princeton University.

Abstract: This talk lays out a vision for re-inventing billion-user-scale personalized recommendation systems by moving into a new phase of problem-solving: the collaboration between humans and AI through Digital Twins. As traditional, expert-driven methods relying on offline evaluations and A/B testing hit their limits in today’s complex ecosystems, we propose a shift. By introducing digital twins, AI becomes a co-designer, shaping algorithms, objectives, and infrastructure alongside humans. This AI-in-the-loop approach unlocks new possibilities in reimagining retrieval and ranking, inventing stable objectives, designing seamless personalization across platforms, and optimizing for quality, reliability, and cost. We’ll showcase early results from this design loop and explore the challenges as well as opportunities.

Yuqing Gao

Cisco AI Canvas: Revolutionizing IT Operations through Domain-Specific LLMs and Agentic Workflows 

Bio: Dr. Yuqing Gao is a Senior Research and Engineering Director currently leading AI Research at Cisco. Throughout her distinguished career at Google, AWS, Microsoft, and IBM, she has spearheaded foundational innovations in AI and search. Her leadership includes driving Gen-AI grounding for Google Search, launching Amazon SageMaker Jumpstart and Canvas, and leading Microsoft’s Satori Knowledge Graph and Bing Entity Search. 

An IEEE Fellow and recipient of the Anita Borg Women of Vision Award, Dr. Gao was recognized by MIT Technology Review for leading one of the “ten emerging technologies that will change the world.” She has published over 120 papers, holds 35 patents, and has been honored with two DARPA awards and an IEEE Best Paper Award. She was named one of Business Insider’s “54 women who rocked the tech world.” 

Abstract: Cisco AI Canvas is a generative AI-powered collaborative workspace designed to redefine IT operations through AgenticOps. It provides a unified interface where NetOps, SecOps, and DevOps teams collaborate in real-time to manage and troubleshoot complex infrastructure. 

The foundation of Cisco AI Canvas is the Cisco Deep Network Model (DNM)—a domain-specific LLM trained on decades of Cisco’s networking and security expertise. This specialized training allows the DNM to interpret infrastructure data with the precision of a seasoned professional, enabling high-fidelity root cause analysis and automated workflows. In this talk, I will explore the unique industrial challenges of building domain-specific AI (LLMs and agents) and share how we are overcoming these hurdles to move from simple assistants to autonomous, agentic troubleshooting. 

WSDM Day Keynote Speakers

Brad L. Boyce

ML-Enabled Workflows for Materials, Manufacturing, and Engineering Design 

Bio: Dr. Brad L. Boyce is a Senior Scientist at Sandia National Laboratories, where he has spearheaded research into material reliability and mechanical behavior since 2001. His work focuses on the accelerated discovery of reliable materials and manufacturing processes. Beyond his primary role at Sandia, Dr. Boyce serves as a Research Professor at Johns Hopkins University in the Hopkins Extreme Materials Institute and contributes as a scientist in the Center for Integrated Nanotechnologies (CINT), jointly operated by Sandia and Los Alamos National Laboratories. A recognized leader in his field, he served as the 2023 President of TMS (The Minerals, Metals & Materials Society) and has been elected to serve in 2027 as the President of AIME (The American Institute of Mining, Metallurgical, and Petroleum Engineers). Boyce’s academic foundation includes a B.S. in Metallurgical Engineering from Michigan Technological University and an M.S. and Ph.D. in Materials Science and Engineering from the University of California, Berkeley. Throughout his career, he has authored over 200 peer-reviewed publications and holds several U.S. patents related to microsystems and nanoindentation. His excellence in engineering and research has been honored with numerous accolades, including the Hertz Foundation Fellowship, the J. Keith Brimacombe Medalist award, and the Marcus A. Grossmann Young Author Award.

Abstract: Machine learning is accelerating the development of novel materials, manufacturing processes, and engineering designs. By leveraging genetic algorithms and surrogate modeling, optimization of 3D lattice structures and interlocking metasurfaces will be demonstrated, achieving significant improvements in stiffness and strength while addressing manufacturability constraints. The integration of high-throughput robotic systems and multimodal data fusion techniques illustrates accelerated materials testing and characterization, exemplified by the development of a platinum-gold alloy with enhanced strength and durability over conventional alloys. Additionally, the introduction of agentic design workflows and deep material networks showcases ML’s capability to manage complex design objectives and model material variability, paving the way for autonomous engineering. Overall, this work underscores the transformative role of ML in advancing materials science and structural design, with implications for a wide range of applications, from fusion reactors to electrical connectors.

Kevin Yager

The Future of AI-empowered Physical Sciences 

Bio: Dr. Kevin Yager is the Interim Director of the Center for Functional Nanomaterials (CFN) at Brookhaven National Laboratory (BNL), where he is also the group leader for “AI-Accelerated Nanoscience”. Dr. Yager obtained his Ph.D. at McGill University with studies of photo-responsive polymers. He worked at NIST on neutron scattering and joined BNL in 2010. His research program combines studies of self-assembling thin films, x-ray scattering measurement methods, and AI/ML for material discovery. He won the Brookhaven 2019 Science & Technology Award and was selected as an Oppenheimer Fellow by the Department of Energy in 2020.

Abstract: This talk will discuss the present and future of AI-enhanced science, with a special focus on material discovery. Autonomous experimentation (AE) based on Bayesian optimization has been used to automate x-ray scattering experiments; examples of AE in polymer science will be presented. I will discuss the future of the experimental sciences in light of the rise of large language models (LLMs) and agentic AI, present a vision for future agentic AI workflows in science, and provide preliminary examples of AI assistants.