Working Papers

Polarization under Biased Argument Sharing. Job Market Paper.

Abstract: This paper explores how self-censorship and biased argument sharing drive ideological polarization, arguing that homophily is neither the primary nor a necessary factor. I develop a formal model to explain why online polarization appears more extreme than offline polarization and why interventions targeting echo chambers have largely failed. Contrary to conventional wisdom, I argue that anonymity is not the main cause of extreme behavior online; instead, in both anonymous and public settings, individuals self-censor to maintain a consistent ideological image, contributing to polarized discourse. Simulations with LLM-based agents show that self-censored argument sharing is consistently present, alongside relatively low levels of homophily, supporting the model's assumptions. The results are particularly robust on large networks, making the model especially suitable for studying polarization in online settings.

Information Aggregation in the Presence of Media on a Network with Experts

Abstract: In this paper, I explore information aggregation in social networks amid growing competition between traditional media and social platforms. I model a network of truth-seeking agents who receive information from both their neighbors and a potentially biased media source. Knowledgeable agents, or “experts,” act as anchors for accurate information within the network. Key questions include how the network’s structure and the placement of experts affect the effectiveness of countering biased narratives. My results reveal that an agent’s influence is tied to their Katz-Bonacich centrality, with simulations indicating that merely increasing the visibility of knowledgeable agents, without reinforcing their credibility, may backfire and breed skepticism. The study underscores that, to mitigate bias, policy and platform efforts should prioritize building the credibility and reputation of accurate sources, for example through external validation mechanisms, over merely extending their reach.
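For reference, the Katz-Bonacich measure mentioned above has a standard textbook definition (the paper's exact parameterization is not given in the abstract, so the decay factor and weighting below are assumptions):

\[
  b(G, \beta) \;=\; \sum_{k=0}^{\infty} \beta^{k} G^{k} \mathbf{1} \;=\; (I - \beta G)^{-1} \mathbf{1},
  \qquad 0 < \beta < 1/\lambda_{\max}(G),
\]

where \(G\) is the network's adjacency matrix, \(\beta\) the decay factor, \(\mathbf{1}\) the all-ones vector, and \(\lambda_{\max}(G)\) the largest eigenvalue of \(G\); the series converges exactly when \(\beta\) lies below this spectral bound.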

Motivated Reasoning is Key to Fact-checking Behavior, and Money is Not, with Dongfang Gaozhao and Pengfei Zhang. Under review.

Abstract: This paper investigates the causes and consequences of fact-checking. In an online experiment, we asked subjects to evaluate news veracity and varied two experimental conditions: (1) the opportunity to receive fact-checking results and (2) a bonus payment for accuracy. We test three competing theories of fact-checking behavior: value of information (VoI), limited attention (LA), and motivated reasoning (MR). We find that monetary incentives do not promote fact-checking. Prior awareness of the news and perceived ease of determining its authenticity significantly reduce fact-checking. Democrats are more likely to fact-check news aligned with Republicans' ideology, suggesting a tendency to seek information when there is a need to defend one's pre-existing beliefs. Overall, our results contradict VoI, show mixed evidence for LA, and support MR. When available, fact-checking consistently improves subjects' accuracy in evaluating news veracity by over 40%.

The Good, the Bad, and the Hulk-like GPT: Analyzing Emotional Decisions of Large Language Models in Cooperation and Bargaining Games, with Mikhail Mozikov, Nikita Severin, Maria Glushanina, Mikhail Baklashkin, Andrey V. Savchenko, Ilya Makarov.

Abstract: Behavioral experiments are an important part of modeling society and understanding human interactions. In practice, many behavioral experiments face challenges of internal and external validity, reproducibility, and social bias, owing to the complexity of social interaction and cooperation in human user studies. Recent advances in Large Language Models (LLMs) have given researchers a promising new tool for simulating human behavior. However, existing LLM-based simulations rest on the unproven hypothesis that LLM agents behave similarly to humans, and they ignore a crucial factor in human decision-making: emotions. In this paper, we introduce a novel methodology and framework for studying both the decision-making of LLMs and its alignment with human behavior under emotional states. Experiments with GPT-3.5 and GPT-4 on four games from two different classes of behavioral game theory show that emotions profoundly affect the performance of LLMs, leading to the development of more effective strategies. While the behavioral responses of GPT-3.5 align strongly with those of human participants, particularly in bargaining games, GPT-4 behaves consistently, ignoring induced emotions in favor of rational decisions. Surprisingly, emotional prompting, particularly with the "anger" emotion, can disrupt the "superhuman" alignment of GPT-4, producing responses that resemble human emotional reactions.

Sovereign Rating Changes and FDI to Emerging Markets: Fear over Greed, with Kaushik Basu and Supriyo De.

Abstract: This paper explores how sovereign credit rating changes influence Foreign Direct Investment (FDI) inflows to emerging and developing economies in the post-crisis period. With trust in credit ratings diminished after the 2008 financial crisis, markets have become more skeptical of these ratings as indicators of economic stability. By examining both absolute and relative rating changes, this study aims to understand how the market response to a rating change depends on its timing and direction. The results show that upward rating changes, while not affecting FDI inflows immediately, have a positive impact in the following period, suggesting market hesitancy: investors may wait to see whether the improvement holds before committing. Downward rating changes, by contrast, cause an immediate decline in FDI with no significant lagged effect, suggesting a sharp but short-lived market reaction. Relative rating (RR) shifts have, on average, nearly double the impact of absolute shifts, underscoring the importance of comparative ratings over standalone assessments for investors.

Publications

EAI: Emotional Decision-Making of LLMs in Strategic Games and Ethical Dilemmas, with Mikhail Mozikov, Nikita Severin, Maria Glushanina, Mikhail Baklashkin, Ivan Nasonov, Daniil Orekhov, Ivan Makovetskiy, Vasily Lavrentyev, Vladislav Pekhotin, Akim Tsvigun, Denis Turdakov, Tatiana Shavrina, Andrey V. Savchenko, Ilya Makarov. In: The Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024, forthcoming).

Abstract: One of the urgent tasks in artificial intelligence is assessing the safety and alignment of large language models (LLMs) with human behavior. Conventional verification on natural language processing problems alone can be insufficient. Since human decisions are typically influenced by emotions, this paper studies LLM alignment in complex strategic and ethical environments, with an in-depth analysis of the biases of human psychology and the impact of emotion on decision-making. We introduce the novel EAI framework for integrating emotion modeling into LLMs and use it to examine the emotional impact on ethics and LLM-based decision-making across a wide range of strategic games, including bargaining and repeated games. Our experimental study with various LLMs demonstrates that emotions can significantly alter the ethical decision-making landscape of LLMs, highlighting the need for robust mechanisms to ensure consistent ethical standards. The game-theoretic assessment shows that proprietary LLMs are prone to emotional biases that increase as model size decreases or when working in non-English languages. Moreover, adding emotions leads the LLMs to increase their cooperation rate during the game.

Logic of Existentialism in Fiction, with Ilya Makarov. In: Proceedings of the Thirtieth International Florida Artificial Intelligence Research Society Conference (FLAIRS), 2017.

Abstract: We consider core approaches to the problem of fictional objects. For each model, we examine whether everything fictional exists, framed in terms of evaluation, the separation of groups of objects, quantification, or existence in modal worlds. The article provides a brief overview of approaches to handling fictional objects and to evaluating statements that contain them.

Adapting First-Person Shooter Video Game for Playing with Virtual Reality Headsets, with Ilya Makarov, Oleg Konoplia, Pavel Polyakov, Maxim Martynov, Peter Zyuzin, Olga Gerasimova. In: Proceedings of the Thirtieth International Florida Artificial Intelligence Research Society Conference (FLAIRS), 2017.

Abstract: This article considers a combination of two modern aspects of game development: (i) the impact of high-quality graphics and virtual reality (VR) adaptation on users’ belief in the realness of in-game events seen with their own eyes; (ii) the modeling of enemy behavior under automatic computer control (a BOT) that reacts similarly to human players. We consider the First-Person Shooter (FPS) genre, which simulates the experience of combat. We describe techniques for overcoming simulator sickness in a shooter on the Oculus Rift and HTC Vive headsets. We created a BOT model that strongly reduces conflict and uncertainty in matching human expectations; the BOT passes a VR-game Turing test at an 80% threshold of believable human-like behavior.