Publications
Publications by category in reverse chronological order. Generated by jekyll-scholar.
2026
- ICWSM: Claim Verification with Adversarial Reasoning and Planning. Kuan-Chieh Lo, Valerie Shalin, Kelly Garrett, and Srinivasan Parthasarathy. In Proceedings of the International AAAI Conference on Web and Social Media, May 2026.
The scale and speed of digital communication demand robust automated claim verification systems that can handle complex, multi-hop reasoning. Existing approaches have critical limitations: single-agent systems exhibit confirmation bias, while conventional multi-agent frameworks rely on homogeneous agents prone to groupthink, limiting critical evaluation. We present CARP (Claim Verification with Adversarial Reasoning and Planning), a novel multi-agent claim verification framework that organizes heterogeneous agents, powered by multiple language models, into competing support and refutation teams. This adversarial structure forces comprehensive evaluation from both perspectives while mitigating confirmation bias and groupthink. Our framework incorporates systematic claim decomposition, strategic verification planning, and multi-hop knowledge retrieval to handle complex reasoning tasks. We evaluate CARP on two claim verification datasets, HOVER and FEVEROUS, where it demonstrates significant improvements in verification accuracy over existing single-agent and homogeneous multi-agent approaches, particularly for complex claims requiring multi-hop reasoning and evidence synthesis. Ablation studies confirm that both the adversarial evaluation structure and multi-hop knowledge retrieval contribute substantially to performance, with benefits scaling with reasoning complexity.
@inproceedings{lo2026claim,
  title     = {Claim Verification with Adversarial Reasoning and Planning},
  author    = {Lo, Kuan-Chieh and Shalin, Valerie and Garrett, Kelly and Parthasarathy, Srinivasan},
  booktitle = {Proceedings of the International AAAI Conference on Web and Social Media},
  year      = {2026},
  month     = may,
}
2025
- EAAMO: FairWAG: Fairness-aware Weighted Aggregation for Graph Learning in a Federated Setting. Kuan-Chieh Lo, Yuntian He, Yuze Jiang, and Srinivasan Parthasarathy. In Proceedings of the 5th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 2025.
Graph representation learning has revolutionized the modeling of complex relationships in networked data. While federated graph learning (FGL) enables privacy-preserving collaborative training across multiple clients, ensuring demographic fairness becomes challenging when sensitive data is heterogeneously distributed. We identify that existing bias mitigation algorithms in federated learning struggle to maintain fairness when client data exhibits high demographic skew. To address this, we introduce FairWAG (Fairness-aware Weighted Aggregation for Graphs), a novel federated framework that preserves demographic fairness in graph representation learning across varying bias levels. Our approach leverages a core idea from cooperative game theory, Shapley values, to quantify client models' contributions to performance and fairness, enabling adaptive aggregation weights. Additionally, we measure model neurons' sensitivities to class labels and sensitive attributes, allowing fine-grained aggregation that further optimizes the performance-fairness trade-off. Experimental results demonstrate that our framework achieves superior performance-fairness trade-offs compared to existing algorithms across different scenarios.
@inproceedings{lo2025fairwag,
  title     = {FairWAG: Fairness-aware Weighted Aggregation for Graph Learning in a Federated Setting},
  author    = {Lo, Kuan-Chieh and He, Yuntian and Jiang, Yuze and Parthasarathy, Srinivasan},
  booktitle = {Proceedings of the 5th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization},
  pages     = {119--150},
  year      = {2025},
}

- ICDM: Crisis Observatory: Extracting Credible Signals During a Crisis in the Age of LLMs. Kuan-Chieh Lo, Pranav Maneriker, Sriram Sai Ganesh, Dominik Winecki, Kelly Garrett, Ayaz Hyder, Arnab Nandi, Valerie Shalin, Shannon Bowen, Amit Sheth, and others. In 2025 IEEE International Conference on Data Mining Workshops (ICDMW), 2025.
Systems for crisis response have required several different models for analyzing unstructured text, such as identifying needs, locations, and topics, and routing and matching needs with available responders. Large Language Models (LLMs) have replaced task-specific models across various language processing tasks. However, LLMs are limited by their training data, which is collected before the crisis. In this demo, we explore the use of LLMs in crisis response scenarios with rapidly evolving information environments. We show how augmenting these models with reliable external sources of crisis-specific information can help build adaptive response systems.
@inproceedings{lo2025crisis,
  title        = {Crisis Observatory: Extracting Credible Signals During a Crisis in the Age of LLMs},
  author       = {Lo, Kuan-Chieh and Maneriker, Pranav and Ganesh, Sriram Sai and Winecki, Dominik and Garrett, Kelly and Hyder, Ayaz and Nandi, Arnab and Shalin, Valerie and Bowen, Shannon and Sheth, Amit and others},
  booktitle    = {2025 IEEE International Conference on Data Mining Workshops (ICDMW)},
  pages        = {2602--2605},
  year         = {2025},
  organization = {IEEE},
}
2022
- WWW: Victor: An implicit approach to mitigate misinformation via continuous verification reading. Kuan-Chieh Lo, Shih-Chieh Dai, Aiping Xiong, Jing Jiang, and Lun-Wei Ku. In Proceedings of the ACM Web Conference 2022, 2022.
We design and evaluate VICTOR, an easy-to-apply module on top of a recommender system to mitigate misinformation. VICTOR takes an elegant, implicit approach to delivering fake-news verifications, such that readers of fake news can continuously access more verified news articles about fake-news events without explicit correction. We frame fake-news intervention within VICTOR as a graph-based question-answering (QA) task, with Q as a fake-news article and A as the corresponding verified articles. Specifically, VICTOR adopts reinforcement learning: it first considers fake-news readers' preferences supported by underlying news recommender systems and then directs their reading sequences toward the verified news articles. To verify the performance of VICTOR, we collect and organize VERI, a new dataset consisting of real-news articles, user browsing logs, and fake-real news pairs for a large number of misinformation events. We evaluate zero-shot and few-shot VICTOR on VERI to simulate the never-exposed and seen-before conditions of users reading a piece of fake news. Results demonstrate that, compared to baselines, VICTOR proactively delivers 6% more verified articles with a 7.5% increase in diversity to over 68% of at-risk users who have been exposed to fake news. Moreover, we conduct a field user study in which 165 participants evaluated fake news articles. Participants in the VICTOR condition show better exposure, proposal, and click rates on verified news articles than those in the other two conditions. Altogether, our work demonstrates the potential of VICTOR to combat fake news by delivering verified information implicitly.
@inproceedings{lo2022victor,
  title     = {Victor: An implicit approach to mitigate misinformation via continuous verification reading},
  author    = {Lo, Kuan-Chieh and Dai, Shih-Chieh and Xiong, Aiping and Jiang, Jing and Ku, Lun-Wei},
  booktitle = {Proceedings of the ACM Web Conference 2022},
  pages     = {3511--3519},
  year      = {2022},
}

- ACL: Learning to rank visual stories from human ranking data. Chi-Yang Hsu, Yun-Wei Chu, Vincent Chen, Kuan-Chieh Lo, Chacha Chen, Ting-Hao Huang, and Lun-Wei Ku. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022.
Visual storytelling (VIST) is a typical vision-and-language task that has seen extensive development in natural language generation research. However, it remains unclear whether conventional automatic evaluation metrics for text generation are applicable to VIST. In this paper, we present VHED (VIST Human Evaluation Data), a dataset that re-purposes human evaluation results for automatic evaluation; with it we develop Vrank (VIST Ranker), a novel reference-free VIST metric for story evaluation. We first show that results from commonly adopted automatic metrics for text generation correlate little with human evaluation, which motivates us to use human evaluation results directly to learn the automatic evaluation model. In our experiments, we evaluate generated texts and predict story ranks using our model as well as other reference-based and reference-free metrics. Results show that Vrank's predictions align significantly better with human evaluation than other metrics, with almost 30% higher accuracy when ranking story pairs. Moreover, we demonstrate that only Vrank shows human-like behavior in its strong ability to find the better story when the quality gap between two stories is large. Finally, we show the superiority of Vrank through its generalizability to purely textual stories, and conclude that this reuse of human evaluation results puts Vrank in a strong position for continued future advances.
@inproceedings{hsu2022vrank,
  title     = {Learning to rank visual stories from human ranking data},
  author    = {Hsu, Chi-Yang and Chu, Yun-Wei and Chen, Vincent and Lo, Kuan-Chieh and Chen, Chacha and Huang, Ting-Hao and Ku, Lun-Wei},
  booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages     = {6365--6378},
  year      = {2022},
}
2021
- WWW: Escape from an echo chamber. Kuan-Chieh Lo, Shih-Chieh Dai, Aiping Xiong, Jing Jiang, and Lun-Wei Ku. In Companion Proceedings of the Web Conference 2021, 2021.
The echo chamber effect refers to the phenomenon in which online users exhibit selective exposure and ideological segregation on political issues. Prior studies indicate a connection between the spread of misinformation and online echo chambers. In this paper, to help users escape from an echo chamber, we propose a novel news-analysis platform that provides a panoramic view of stances toward a particular event across different news media sources. Moreover, to help users better recognize the stances of the news sources that published these articles, we adopt a news stance classification model that categorizes each stance as "agree", "disagree", "discuss", or "unrelated" to a relevant claim for specified events with political stances. Finally, we propose two ways of making the echo chamber effect visible: 1) visualizing an event and its associated pieces of news; and 2) visualizing the stance distribution of news from sources of different political ideologies. By making the echo chamber effect explicit, we expect online users to become exposed to more diverse perspectives on a specific event.
@inproceedings{lo2021echo,
  title     = {Escape from an echo chamber},
  author    = {Lo, Kuan-Chieh and Dai, Shih-Chieh and Xiong, Aiping and Jiang, Jing and Ku, Lun-Wei},
  booktitle = {Companion Proceedings of the Web Conference 2021},
  pages     = {713--716},
  year      = {2021},
}

- WSDM: All the wiser: Fake news intervention using user reading preferences. Kuan-Chieh Lo, Shih-Chieh Dai, Aiping Xiong, Jing Jiang, and Lun-Wei Ku. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, 2021.
To address the increasingly significant issue of fake news, we develop a news reading platform in which we propose an implicit approach to reducing people's belief in fake news. Specifically, we leverage reinforcement learning to learn an intervention module on top of a recommender system (RS), such that the module is activated to replace the RS and recommend news toward verification once users encounter fake news. To examine the effect of the proposed method, we conduct a comprehensive evaluation with 89 human subjects and measure the effective rate of change in belief. Moreover, 84% of participants indicate that the proposed platform can help them defeat fake news.
@inproceedings{lo2021wiser,
  title     = {All the wiser: Fake news intervention using user reading preferences},
  author    = {Lo, Kuan-Chieh and Dai, Shih-Chieh and Xiong, Aiping and Jiang, Jing and Ku, Lun-Wei},
  booktitle = {Proceedings of the 14th ACM International Conference on Web Search and Data Mining},
  pages     = {1069--1072},
  year      = {2021},
}