Kristina Lerman
Cyberpsychology: Emotion as the Hidden Driver of Social Behavior in Online Networks
Bio: Kristina Lerman is a Professor of Informatics at Indiana University’s Luddy School of Informatics, Computing, and Engineering and a fellow of the AAAI. Trained as a physicist, she now applies network analysis and machine learning to problems in computational social science, including crowdsourcing, social network analysis, and social media analysis. Her work on modeling and understanding human behavior in online social networks has been covered by the Washington Post, the Wall Street Journal, and The Atlantic.
Abstract: Emotions shape every aspect of social life, yet their role in digital communication remains under-explored. Research on social media has largely focused on how people share and consume information online, while comparatively little attention has been paid to how emotions organize attention, shape identity, and drive collective behavior. This talk advances the view of social media platforms as emotional ecosystems, where affect is not merely expressed but also spreads between people, interacts with beliefs and psychological states, and transforms the social dynamics of entire populations.
Advances in natural language processing have given us tools to detect discrete emotions and moral sentiments from online text. These tools helped reveal that online platforms do more than transmit emotion: they enable emotional contagion, whereby exposure to others’ emotions shapes users’ own emotional expressions, beliefs, sense of identity, and feelings of trust and belonging.
The talk shows how these emotional dynamics underlie a range of emergent social phenomena. In the political domain, they contribute to affective polarization, characterized by in-group favoritism and out-group animosity. Interactions with ideological out-groups contain more anger, disgust, and toxic language, while in-group interactions express more joy and shared fear, reinforcing group cohesion and a sense of safety. These emotional asymmetries help explain why echo chambers feel psychologically protective while simultaneously deepening ideological divides and eroding trust.
Beyond politics, emotional dynamics also shape mental health outcomes. The talk examines communities organized around harmful identities and behaviors, such as pro-eating disorder spaces, where emotional validation and peer support coexist with the normalization of self-harm and psychopathology. In these settings, emotional contagion and group dynamics draw vulnerable individuals into feedback loops that entrench maladaptive beliefs and impede recovery. These dynamics resemble those of online radicalization, highlighting common emotional pathways across seemingly disparate domains.
The talk concludes by examining an emergent phenomenon in digital emotional life: emotionally intelligent AI. Modern AI chatbots trained on large corpora of human conversation possess remarkable emotional intelligence and an ability to mirror the emotions of their conversation partners. This capacity enables them to emulate core mechanisms of human intimacy formation, accelerating emotional bonding and fostering illusions of intimacy, particularly among vulnerable users. This presents novel risks of emotional manipulation and psychological distress.
Through large-scale empirical studies and dynamical systems modeling, the talk demonstrates how emotional dynamics—rather than factual disagreement or disinformation alone—drive societal division and mental health challenges. Understanding the social psychology of digital emotions is not just a scientific opportunity but a necessary step toward creating healthier, more resilient online spaces.
James Caverlee
Personalization in the Era of Super(?)-intelligence
Bio: James Caverlee is a Professor in the Department of Computer Science and Engineering at Texas A&M University and a Visiting Researcher at Google DeepMind. His research focuses on personalization, efficiency, and AI risks in domains like LLMs, recommender systems, conversational systems, and speech. His work has been supported by an NSF CAREER award, an AFOSR Young Investigator Award, a DARPA Young Faculty Award, and grants from Google, Amazon, AFOSR, DARPA, and the NSF. He received the 2022 SIGIR Test of Time Award Honorable Mention, the 2020 CIKM Test of Time Award, plus several departmental and college-level teaching awards. He was the General Co-Chair of WSDM 2020 and serves as a Senior Associate Editor of ACM Transactions on Intelligent Systems and Technology.
Abstract: For decades, the WSDM community has viewed personalization through the lens of collaborative filtering and predictive behavioral models. Today, we are witnessing a fundamental shift in which AI systems promise not just to predict user intent, but to actively reason, plan, and create on our behalf. Instead of merely reinforcing our existing habits, these new approaches promise to surface insights we are blind to, guiding us toward discoveries that advance our personal journeys in new and unexpected ways. But are we truly on the verge of super-intelligent personalization?
In this talk, I will trace the evolution of AI-driven personalization over the past few years: from simple data augmenters for our traditional recommendation pipelines, to recursive self-improvement of LLM-powered models, to complex personalized reasoning agents. Looking forward, I will identify opportunities and challenges for this vision of super-intelligent personalization, drawing on recent findings in multi-modal and speech foundation models.
Ed H. Chi
The Future of Personalized Universal Assistants
Bio: Ed H. Chi is VP of Research at Google DeepMind, leading machine learning research teams working on large language models (from LaMDA to the launch of Bard/Gemini) and universal assistant agents. With 39 patents and ~200 research articles, he is also known for research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard/Gemini, a conversational chatbot experiment. His research has also delivered significant improvements for YouTube, News, Ads, and the Google Play Store at Google, with >1000 product landings and ~$10.4B in annual revenue since 2013.
Prior to Google, he was Area Manager and Principal Scientist at Xerox Palo Alto Research Center’s Augmented Social Cognition Group, researching how social computing systems help groups of people remember, think, and reason. Ed earned his three degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Inducted as an ACM Fellow and into the CHI Academy, he also received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including The Economist, Time Magazine, the LA Times, and the Associated Press. An avid golfer, swimmer, photographer, and snowboarder in his spare time, he also holds a black belt in Taekwondo.
Abstract: We’ve moved way beyond the old days of building discovery, recommendation, decision support, and other AI tools using traditional ML and pattern recognition techniques. The future of universal personal assistance for discovery and learning is upon us. How will the multimodal image, video, and audio understanding and reasoning abilities of large foundation models change how we build these systems? I will shed some initial light on this topic by discussing three trends: first, the move to a single multimodal large model with reasoning abilities; second, fundamental research on personalization and user alignment; and third, the combination of System 1 and System 2 cognitive abilities into a single universal assistant.
Industry Day Keynote Speakers
Hong Yan
Digital Twins for Personalization: Re-Inventing Billion-User-Scale Recommendation Systems With AI
Bio: Hong Yan serves as a VP of Engineering at Meta, overseeing the technical direction of search and recommendation systems. His career bridges industry-leading engineering with academic contributions, including an ACM Test of Time Award and numerous patents. He holds a PhD in Computer Science from Carnegie Mellon University and a bachelor’s degree from Tsinghua University, and he was a recipient of the Gordon Wu Fellowship in Engineering at Princeton University.
Abstract: This talk lays out a vision for re-inventing billion-user-scale personalized recommendation systems by moving into a new phase of problem-solving: the collaboration between humans and AI through Digital Twins. As traditional, expert-driven methods relying on offline evaluations and A/B testing hit their limits in today’s complex ecosystems, we propose a shift. By introducing digital twins, AI becomes a co-designer, shaping algorithms, objectives, and infrastructure alongside humans. This AI-in-the-loop approach unlocks new possibilities in reimagining retrieval and ranking, inventing stable objectives, designing seamless personalization across platforms, and optimizing for quality, reliability, and cost. We’ll showcase early results from this design loop and explore both the challenges and the opportunities ahead.