Expert testimony and scientific evidence are often critical parts of litigation, both civil and criminal. Such testimony is frequently problematic, however, for both judge and counsel. Scientific, mathematical, medical, and technological evidence, to name but a few areas, is complex and often outside the knowledge of the judge and counsel. Further, our adversarial system permits, if not compels, biased testimony intended to back up the claims of the party hiring the witness. Artificial Intelligence (AI) tools should be able to help lawyers and judges check at least some of the worst abuses presented by expert testimony.
Expert Testimony Validity
AI can be a powerful tool for attorneys analyzing the credibility of expert witnesses in litigation. The value of expert testimony rests on the expert’s credibility, and AI can be used to review an expert’s body of work for inconsistencies and inaccuracies.
Once uncovered, inconsistencies or discrepancies in an expert’s work and testimony can be used to challenge the expert’s credibility. The party retaining the expert can therefore use AI to identify potential inconsistencies before opposing counsel raises them on cross-examination, giving the party time to prepare defenses.
Additionally, AI can be an especially valuable aid to lawyers given the time it takes to review the voluminous technical and scientific material that often makes up expert reports and testimony. In one case, a California law firm used an AI tool called CoCounsel to analyze 75,000 pages spanning 63 of an expert’s deposition transcripts. The lawyers prompted the software to identify specific claims the expert had made about his fees and whether he had ever offered certain opinions relevant to his planned testimony in the current case. The AI identified past statements that were inconsistent with his report in the current case and provided citations to his testimony in deposition transcripts from prior proceedings. It also generated a report flagging inconsistencies in the expert’s commentary on a specified medical imaging study by analyzing testimony given in prior cases. The AI took 45 minutes to complete a task that would normally take lawyers days.
The speed at which AI tools can analyze expert reports can be especially valuable for small firms with fewer attorneys available to review documents. Because reviewing an expert’s voluminous material is so time-intensive, AI lets lawyers at smaller firms spend more time on other matters for multiple clients and less on document review. This time savings also benefits litigants by reducing the billable hours lawyers charge for document review.
Scientific Evidence Validity
Even the best and most competent expert depends on the underlying science. The use of AI tools to verify expert claims in scientific and technical studies will likely increase as new AI models are developed. Researchers at IBM, Technische Universität Darmstadt, and Mohamed Bin Zayed University of Artificial Intelligence created a model known as MISSCI to verify whether the underlying evidence in scientific studies supports the conclusions the researchers draw. If a conclusion is not supported by the study or data cited, MISSCI can explain why the reasoning is flawed. For example, if an expert’s Study X makes Claim Y based on data citing Study Z, MISSCI can check whether the claims in Study Z actually support Claim Y; if they do not, the credibility of Study X, and of the expert, is called into question. While the model is still under development and is not intended to be a “standalone tool for fact checking,” its creators believe it could help humans verify scientific data proffered in studies more efficiently. Counsel faced with scientific claims could use MISSCI or similar tools to test the accuracy of the scientific “facts” underlying expert testimony and prior work.
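In schematic terms, the core check such a model automates can be reduced to a toy sketch. To be clear, this is not MISSCI’s actual method, and every study name and finding below is hypothetical; real models compare claims against evidence using language understanding rather than exact matching. The sketch simply illustrates the Study X / Claim Y / Study Z structure described above:

```python
# Toy illustration only (not MISSCI's actual method): the check is whether
# the evidence recorded for a cited study actually supports the claim that
# cites it. Here "support" is reduced to a lookup in a hand-built mapping.

# Hypothetical records: each study maps to the findings it actually reports.
study_findings = {
    "Study Z": {"drug A reduces symptom B in adults"},
}

def claim_is_supported(claim: str, cited_study: str) -> bool:
    """Return True only if the cited study reports a finding matching the claim."""
    return claim in study_findings.get(cited_study, set())

# Study X's Claim Y cites Study Z, but Study Z never reports this finding,
# so the citation does not support the claim.
claim_y = "drug A reduces symptom B in children"
print(claim_is_supported(claim_y, "Study Z"))  # prints False
```

The point of the sketch is the structure of the inquiry, not the matching logic: the tool traces a claim back to its cited source and asks whether the source says what the claim needs it to say.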
AI as a Judicial “Aide”
AI tools that are able to analyze scientific studies could also be used to help courts determine whether expert scientific testimony should be admissible under Daubert v. Merrell Dow Pharmaceuticals.[1]
Under the Daubert standard, which is now incorporated in Federal Rule of Evidence 702, “the trial judge must ensure that any and all scientific testimony or evidence admitted is not only relevant, but reliable.”[2] Daubert held that courts may consider, among other factors, whether a scientific technique or theory has been subject to peer review or publication in determining whether to admit testimony relying on it. The reasoning behind this is that peer review increases the probability that flaws in scientific techniques or methodologies will be identified by other qualified experts.[3] AI tools such as MISSCI could help identify flaws in the published works or claims on which an expert relies, even flaws that went undetected during the peer review process. Indeed, despite reviewers’ best efforts, errors can slip through peer review.
Another Daubert factor courts may consider is a technique’s known or potential rate of error.[4] If the technique has a high rate of error, judges may prevent experts from basing their testimony on studies conducted using it. Here too, AI tools could potentially help courts scrutinize a technique’s proffered rate of error and determine whether that rate is supported by the underlying scientific data.
Conclusion
AI can be a powerful tool for synthesizing an expert’s body of work and detecting inconsistencies in the expert’s statements. With careful prompts, AI tools can save lawyers significant time in reviewing expert reports. New AI models that analyze the veracity of scientific data may also emerge as useful resources for examining whether the claims experts make in publications or reports are supported by the underlying data. Courts and parties may likewise turn to AI tools in Daubert challenges to test the accuracy of the data behind a methodology when the admission of evidence or testimony turns on the claimed low error rate of a scientific technique or method.
[1] 509 U.S. 579 (1993).
[2] Id. at 589.
[3] Id. at 593-594.
[4] Id. at 594.
About the Author
Ben Richmond is a 2L at William & Mary Law School. Ben’s research interests include the practical uses of AI in the legal profession, cybersecurity risks to critical infrastructure, and the intersection of technology, trade, and national security. Prior to law school, Ben worked as a litigation analyst and paralegal for the National Security and Cybercrime section of the U.S. Attorney’s Office for the Eastern District of New York. In his free time, Ben enjoys playing intramural sports with his law school classmates. This piece represents the author’s views alone.