Frontiers in Emerging Artificial Intelligence and Machine Learning


Agentic Legal Intake: A Multi-Agent Framework For Hallucination-Free, Audit-Ready AI Screening In Mass-Tort Litigation

Authors

Tejas Sarvankar & Anna John

DOI:

https://doi.org/10.37547/feaiml/Volume02Issue09-02

Keywords:

LLM Automation, Legal Intake, Multi-Agent AI

Abstract

This study presents a multi-agent framework to address the risks of large language models (LLMs) in legal intake, particularly in mass-tort litigation. The research focuses on mitigating hallucination, a phenomenon in which LLMs generate plausible but false information. The study's objective is to evaluate whether a system of peer-auditing agents, combined with human involvement, can outperform a traditional single-agent model in accuracy, data completeness, and audit efficiency. The methodology followed a mixed-methods design, using a multi-agent system with distinct Extractor, Validator, and Auditor agents, followed by human review. The system was tested on 100 anonymized mass-tort intake cases, 70% real and 30% synthetic. The quantitative metrics measured were hallucination rate, completeness score, and human review time; qualitative analysis drew on feedback from six legal operations professionals. The multi-agent framework demonstrated a substantial reduction in the hallucination rate, from 21% in the single-agent baseline to 5%, a 76% decrease. It also significantly improved data completeness, achieving a 92% score compared to 74% in the baseline, an 18-percentage-point increase. Furthermore, the time required for human review of finalized cases dropped by 51%. Qualitative feedback from professionals highlighted increased trust and transparency in the agent-generated outputs due to the built-in audit trails, although some noted issues with precision. In conclusion, the findings confirm that a structured multi-agent LLM framework is a highly effective way to improve the reliability and efficiency of legal intake workflows. By mimicking human peer-review processes, this agentic approach transforms AI into a transparent and accountable augmentation tool, paving the way for explainable and scalable legal AI systems.
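The pipeline described in the abstract can be pictured as a simple chain of roles with a shared audit trail. The Python sketch below is illustrative only, based on the abstract's description of Extractor, Validator, and Auditor agents followed by human review; the paper does not publish an implementation, and all names here (call_llm, IntakeRecord, and the role functions) are hypothetical placeholders.

# Illustrative sketch of the Extractor -> Validator -> Auditor intake pipeline
# described in the abstract. All identifiers are hypothetical assumptions,
# not taken from the paper.

from dataclasses import dataclass, field

def call_llm(role: str, prompt: str) -> str:
    """Placeholder for an LLM call; a real system would use an API client."""
    return f"[{role} output for: {prompt[:40]}...]"

@dataclass
class IntakeRecord:
    raw_text: str                        # anonymized intake narrative
    extracted: str = ""                  # Extractor output (structured facts)
    validation_notes: str = ""           # Validator's peer-audit findings
    audit_trail: list = field(default_factory=list)  # log shown to human reviewers
    approved: bool = False               # set by the human reviewer

def extractor(record: IntakeRecord) -> IntakeRecord:
    # First pass: pull claimant facts out of the raw intake text.
    record.extracted = call_llm("Extractor", f"Extract claimant facts:\n{record.raw_text}")
    record.audit_trail.append(("Extractor", record.extracted))
    return record

def validator(record: IntakeRecord) -> IntakeRecord:
    # Peer-audits the extraction against the source text to flag unsupported claims.
    record.validation_notes = call_llm(
        "Validator",
        "Check these facts against the source and flag unsupported claims:\n"
        f"FACTS: {record.extracted}\nSOURCE: {record.raw_text}",
    )
    record.audit_trail.append(("Validator", record.validation_notes))
    return record

def auditor(record: IntakeRecord) -> IntakeRecord:
    # Reviews the Extractor/Validator exchange and rates completeness.
    verdict = call_llm(
        "Auditor",
        f"Given this audit trail, rate completeness and list open issues:\n{record.audit_trail}",
    )
    record.audit_trail.append(("Auditor", verdict))
    return record

def human_review(record: IntakeRecord) -> IntakeRecord:
    # Final human-in-the-loop step; the audit trail is what shortens review time.
    for role, output in record.audit_trail:
        print(f"{role}: {output}")
    record.approved = True  # a real reviewer would accept, correct, or reject
    return record

if __name__ == "__main__":
    case = IntakeRecord(raw_text="Claimant reports exposure to product X in 2018 ...")
    case = human_review(auditor(validator(extractor(case))))

In this sketch, every agent appends to the same audit trail, which is the mechanism the abstract credits for the reviewers' reported gains in trust and the reduction in human review time.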

References

Dahl, M., Magesh, V., Suzgun, M., & Ho, D. E. (2024). Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models. Journal of Legal Analysis, 16(1), 64–93.

Stanford Institute for Human-Centered Artificial Intelligence. (2023). Hallucinating Law: Legal Mistakes of Large Language Models Are Pervasive.

Reynolds, G. (2025). Short Circuit: In court, AI 'hallucinations' in legal filings & how to avoid making headlines. Reuters.

Zhang, L., & Ashley, K. D. (2025). Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation. In Proceedings of Workshop on Legally Compliant Intelligent Chatbots at ICAIL 2025. ACM, New York, NY, USA, 13 pages.

Xu, Z., Shi, S., Hu, B., Yu, J., Li, D., Zhang, M., & Wu, Y. (2023). Towards Reasoning in Large Language Models via Multi-Agent Peer Review Collaboration. arXiv preprint arXiv:2311.08152.

Darwish, A. M., Rashed, E. A., & Khoriba, G. (2025). Mitigating LLM Hallucinations Using a Multi-Agent Framework. Information, 16(7), 517.

Yu, H. Q., & McQuade, F. (2025). RAG-KG-IL: A Multi-Agent Hybrid Framework for Reducing Hallucinations and Enhancing LLM Reasoning through RAG and Incremental Knowledge Graph Learning Integration. arXiv preprint arXiv:2503.13514.

Tran, K.-T., Dao, D., Nguyen, M.-D., Pham, Q.-V., O'Sullivan, B., & Nguyen, H. D. (2025). Multi-Agent Collaboration Mechanisms: A Survey of LLMs. arXiv preprint, 35 pages.

Wang, H., Yu, Z., Huang, B., & Shu, K. (2025). Privacy-Aware Decoding: Mitigating Privacy Leakage of Large Language Models in Retrieval-Augmented Generation. arXiv preprint arXiv:2508.09098.

Hemrajani, R. (2025). Evaluating the Role of Large Language Models in Legal Practice in India. arXiv preprint arXiv:2508.09713.

Published

2025-09-13

How to Cite

Tejas Sarvankar, & Anna John. (2025). Agentic Legal Intake: A Multi-Agent Framework For Hallucination-Free, Audit-Ready AI Screening In Mass-Tort Litigation. Frontiers in Emerging Artificial Intelligence and Machine Learning, 2(09), 7–16. https://doi.org/10.37547/feaiml/Volume02Issue09-02