Agentic Legal Intake: A Multi-Agent Framework For Hallucination-Free, Audit-Ready AI Screening In Mass-Tort Litigation
Abstract
This study presents a multi-agent framework that addresses the risks large language models (LLMs) pose in legal intake, particularly in mass-tort litigation, by mitigating hallucination: the generation of plausible but false information. The objective is to evaluate whether a system of peer-auditing agents, combined with human-in-the-loop review, outperforms a traditional single-agent model on accuracy, data completeness, and audit efficiency. Using a mixed-methods design, a multi-agent system with distinct Extractor, Validator, and Auditor agents, followed by human review, was tested on 100 anonymized mass-tort intake cases (70% real, 30% synthetic). The quantitative metrics were hallucination rate, completeness score, and human review time; qualitative analysis drew on feedback from six legal-operations professionals. The multi-agent framework reduced the hallucination rate from 21% in the single-agent baseline to 5%, a 76% relative decrease; raised data completeness from 74% to 92%, an 18-percentage-point gain; and cut human review time for finalized cases by 51%. Professionals reported increased trust and transparency in agent-generated outputs owing to the built-in audit trails, though some noted issues with precision. The findings confirm that a structured multi-agent LLM framework substantially improves the reliability and efficiency of legal-intake workflows. By mimicking human peer review, this agentic approach transforms AI into a transparent and accountable augmentation tool, paving the way for explainable and scalable legal AI systems.
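The Extractor → Validator → Auditor pipeline described above can be sketched in code. The following is a minimal, hypothetical illustration only: the paper does not publish an implementation, so the field names, the rule-based stand-ins for each agent, and the grounding check are all assumptions; in the actual study each role would be played by an LLM agent rather than deterministic code.

```python
from dataclasses import dataclass, field

@dataclass
class IntakeRecord:
    raw_text: str
    fields: dict = field(default_factory=dict)
    audit_trail: list = field(default_factory=list)   # supports audit-ready review
    needs_human_review: bool = False

# Hypothetical required fields for a mass-tort intake form (illustrative only).
REQUIRED_FIELDS = ("claimant_name", "exposure_product", "diagnosis_date")

def extractor(record):
    # Stand-in for the Extractor agent: parse simple "key: value" lines.
    for line in record.raw_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            record.fields[key.strip()] = value.strip()
    record.audit_trail.append(f"extractor: parsed {len(record.fields)} fields")
    return record

def validator(record):
    # Stand-in for the Validator agent: flag missing required fields for a
    # human rather than guessing, which is how hallucinated values are avoided.
    missing = [f for f in REQUIRED_FIELDS if not record.fields.get(f)]
    if missing:
        record.needs_human_review = True
        record.audit_trail.append("validator: missing " + ", ".join(missing))
    else:
        record.audit_trail.append("validator: all required fields present")
    return record

def auditor(record):
    # Stand-in for the Auditor agent: require every extracted value to appear
    # verbatim in the source text; unsupported values are escalated.
    unsupported = [k for k, v in record.fields.items()
                   if v and v not in record.raw_text]
    if unsupported:
        record.needs_human_review = True
        record.audit_trail.append("auditor: unsupported " + ", ".join(unsupported))
    else:
        record.audit_trail.append("auditor: all values grounded in source")
    return record

def run_pipeline(raw_text):
    # Agents run in sequence; the audit trail records each stage's verdict.
    record = IntakeRecord(raw_text=raw_text)
    for stage in (extractor, validator, auditor):
        record = stage(record)
    return record

if __name__ == "__main__":
    rec = run_pipeline(
        "claimant_name: Jane Doe\n"
        "exposure_product: Product X\n"
        "diagnosis_date: 2021-03-14\n"
    )
    print(rec.needs_human_review)
    print(rec.audit_trail)
```

In this sketch the audit trail accumulates one entry per agent, mirroring the transparency that reviewers in the study credited for their increased trust; any record that fails validation or grounding is routed to human review instead of being finalized automatically.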