2025-2026

Area: Information Systems

Date/Time: September 12, 2025
Room: Bronfman 046

Guest Speaker: Hyeunjung (Elina) Hwang (Foster School of Business, University of Washington)

Topic: Killing not just weeds: Unexpected consequences of combating misinformation

Social platforms employ interventions to combat the rapid spread of misinformation. This study focuses on one such intervention employed by X* that aims to suppress misinformation by helping users find accurate information. In particular, this study seeks to provide a holistic view of the intervention's effectiveness by investigating its impact on both true and false information diffusion. To this end, we draw on dual-process theory to understand the potential effect and leverage a quasi-experimental setting to estimate it. Our results reveal that the intervention suppresses not only the spread of false news but also true information. To understand this unexpected finding, we further collect data using Amazon Mechanical Turk and find that true information is suppressed because people have difficulty discerning its truthfulness. We provide insights into the tweet characteristics that tend to mislead people's perceptions.


Area: Operations Management

Date/Time: November 21, 2025, from 11:00 AM to 12:00 PM
Room: Bronfman 045

Guest Speaker: Heng Zhang, W. P. Carey Business School, Arizona State University

Topic: Large Language Models for Market Research: A Data-augmentation Approach

Large Language Models (LLMs) have transformed artificial intelligence by excelling in complex natural language processing tasks. Their ability to generate human-like text has opened new possibilities for market research, particularly in conjoint analysis, where understanding consumer preferences is essential but often resource-intensive. Traditional survey-based methods face limitations in scalability and cost, making LLM-generated data a promising alternative. However, while LLMs have the potential to simulate real consumer behavior, recent studies highlight a significant gap between LLM-generated and human data, with biases introduced when substituting between the two. In this paper, we address this gap by proposing a novel statistical data augmentation approach that efficiently integrates LLM-generated data with real data in conjoint analysis. This results in statistically robust estimators with consistent and asymptotically normal properties, in contrast to naive approaches that simply substitute human data with LLM-generated data, which can exacerbate bias. We further present a finite-sample performance bound on the estimation error. We validate our framework through an empirical study on COVID-19 vaccine preferences, demonstrating its superior ability to reduce estimation error and save data and costs by 24.9% to 79.8%. In contrast, naive approaches fail to save data due to the inherent biases in LLM-generated data compared to human data. Another empirical study on sports car choices validates the robustness of our results. Our findings suggest that while LLM-generated data is not a direct substitute for human responses, it can serve as a valuable complement when used within a robust statistical framework.
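The contrast between naive substitution and statistical augmentation can be illustrated with a toy sketch. This is not the paper's estimator (which targets conjoint choice models and comes with consistency, asymptotic normality, and finite-sample guarantees); it is a hypothetical mean-estimation analogue in which a small human sample is paired with LLM responses so the LLM's bias can be measured and subtracted from a large LLM-only sample. All numbers below are made up for illustration.

```python
import numpy as np

# Hypothetical binary "chooses option A" responses.
# True human preference rate: 0.60. The LLM over-predicts at 0.70.
human = np.array([1] * 120 + [0] * 80)            # 200 human responses, mean 0.60
llm_paired = np.array([1] * 140 + [0] * 60)       # LLM answers for the same 200 profiles, mean 0.70
llm_large = np.array([1] * 1400 + [0] * 600)      # 2000 cheap LLM-only responses, mean 0.70

# Naive substitution: use LLM data directly -> inherits the LLM's bias.
naive = llm_large.mean()                          # 0.70

# Augmented estimator: large LLM sample, debiased by the
# human-vs-LLM gap measured on the paired sample.
augmented = llm_large.mean() + (human.mean() - llm_paired.mean())  # 0.70 + (0.60 - 0.70) = 0.60

print(f"naive: {naive:.2f}, augmented: {augmented:.2f}")
```

In this stylized case the correction removes the bias exactly; in practice the paired human sample is noisy, and the value of the large LLM sample is variance reduction on top of an unbiased human anchor, which is the intuition behind treating LLM data as a complement rather than a substitute.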


Laurent Picard Distinguished Lecturer Series

Date: Friday, September 26, 2025
Time: 10:30 AM - 12:00 PM
Location: Bronfman building, room 045

Guest Speaker: Lauren Rivera

Peter G. Peterson Professor of Corporate Ethics, Professor of Management & Organizations
Professor of Sociology, Weinberg College of Arts & Sciences (Courtesy)

Topic: Tainting or Telling: How the Meaning of Social Ties Varies Across Disciplines

While previous research has analyzed how the presence or absence of social ties shapes labor market outcomes and inequalities, less is known about how employers interpret the value of social relationships in personnel decisions and how these meanings may vary by context. We examine these issues in the context of a high-stakes moment of stratification in academic careers: faculty tenure decisions. Drawing from an archival analysis of more than ten years of external tenure evaluations across four disciplines at two R1 universities, we analyze how evaluators describe their relationships with candidates and the meanings they attribute to various types of ties when evaluating tenure cases. We find distinct cross-disciplinary patterns, which were strongest in sociology and computer science. Sociologists viewed ties to candidates as tainting, corrupting the integrity of the evaluation process by introducing potentially biasing information unrelated to the quality of a person's scholarship. Conversely, in computer science, ties were seen as telling, providing useful information about a candidate's intellectual, social, and moral qualities that were seen as integral to evaluating the strength of a tenure case. Regardless of the actual strength of the tie, sociologists frequently engaged in a strategy of social distancing, in which they asserted their impartiality by downplaying their existing connections to a candidate, while computer scientists emphasized the closeness of their social ties with candidates as valuable affective and informational resources to be embraced in review. Interviews with faculty in both disciplines shed light on processes underlying these patterns. Overall, the study reveals that the use and value of social ties in personnel decisions are not universal but rather vary according to cultural norms embedded within different institutional contexts and the structure of work in particular settings.


Laurent Picard Distinguished Lecturer Series

Date/Time: TBD
Room: TBD

Guest Speaker: Karim Lakhani (Harvard Business School)

Topic: TBD
