
Social presence is central to the enjoyment of live events, yet many fans watch sports alone. We investigate whether multi-agent conversational AI systems can recreate the dynamics of co-viewing and enhance immersion. We present CompanionCast, a prototype in which multiple role-specialized AI agents (supportive, analytical, humorous) respond in real time to sports events using caption streams, speech synthesis, and spatial audio. Distinctively, CompanionCast integrates an LLM-based evaluator agent that iteratively scores and refines conversations across five dimensions (relevance, authenticity, engagement, diversity, personality consistency). A pilot study with soccer fans suggests that multi-agent interaction improves perceived social presence compared to solo viewing, though delays and ASR errors limit conversational fluidity. We contribute: (1) a framework for orchestrating multi-agent conversations around real-time multimodal streams, (2) a novel evaluator-agent pipeline for conversation quality control, and (3) exploratory evidence of increased social presence in AI-mediated co-viewing. We discuss challenges and future directions for generalizing this approach to broader event streaming and multimodal AI evaluation.
Sep 26, 2025
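The evaluator-agent pipeline described above can be pictured as a score-then-refine loop. The sketch below is purely illustrative and not from the paper: the dimension names come from the abstract, but the `score` and `refine` stubs stand in for the LLM calls CompanionCast would make, and the threshold and round limit are invented for the example.

```python
from dataclasses import dataclass

# The five dimensions named in the abstract.
DIMENSIONS = ["relevance", "authenticity", "engagement",
              "diversity", "personality_consistency"]

@dataclass
class Utterance:
    agent_role: str  # e.g. "supportive", "analytical", "humorous"
    text: str

def score(utterance: Utterance) -> dict:
    """Stub evaluator: in the real system an LLM would rate each
    dimension. A fixed toy score keeps the loop runnable here."""
    return {d: 0.6 for d in DIMENSIONS}

def refine(utterance: Utterance, scores: dict) -> Utterance:
    """Stub refinement: an LLM would rewrite the utterance to
    raise the low-scoring dimensions."""
    return Utterance(utterance.agent_role, utterance.text + " (refined)")

def evaluate_and_refine(utterance: Utterance,
                        threshold: float = 0.7,
                        max_rounds: int = 3) -> Utterance:
    # Iteratively score and refine until every dimension clears
    # the threshold, or the round budget is exhausted.
    for _ in range(max_rounds):
        scores = score(utterance)
        if min(scores.values()) >= threshold:
            break
        utterance = refine(utterance, scores)
    return utterance
```

With the stub scores below the threshold, the loop simply exhausts its refinement budget; with a real LLM evaluator it would stop as soon as all five dimensions pass.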

A framework and benchmark to evaluate LLMs' multilingual capabilities on healthcare queries, revealing significant performance gaps across languages.
Dec 10, 2024

Peer review is fundamental to the integrity and advancement of scientific publication. Traditional peer review analyses often rely on exploratory statistics over existing peer review data, which fail to address the multivariate nature of the process or account for latent variables, and are further constrained by privacy concerns due to the sensitive nature of the data. We introduce AgentReview, the first large language model (LLM) based peer review simulation framework, which effectively disentangles the impacts of multiple latent factors and addresses the privacy issue. Our study reveals significant insights, including a notable 37.1% variation in paper decisions due to reviewers' biases, supported by sociological theories such as social influence theory, altruism fatigue, and authority bias. We believe this study can offer valuable insights for improving the design of peer review mechanisms. Our code is available at https://github.com/Ahren09/AgentReview.
Nov 12, 2024
