AI Newsletter

Trust but Verif(AI): The Use of AI as a Tool to Analyze Credibility of Expert Witnesses and their Materials – by Ben Richmond

Expert Testimony Validity

Artificial Intelligence (AI) can be a powerful tool for attorneys to analyze the credibility of expert witnesses in litigation. The value of expert testimony rests upon the expert’s credibility, and AI can be used to review an expert’s body of work for inconsistencies and inaccuracies.

Once uncovered, inconsistencies or discrepancies in an expert’s work and testimony can undermine the expert’s credibility. Given this, the party retaining the expert can use AI to identify potential inconsistencies that opposing counsel might exploit on cross-examination, giving the party time to prepare responses before those inconsistencies are raised.

Additionally, AI can be an especially valuable aid to lawyers given the time it takes to review the voluminous technical and scientific material that often makes up expert reports and testimony. In one case, a California law firm used an AI tool called CoCounsel to analyze 75,000 pages spanning 63 of an expert’s deposition transcripts. The lawyers prompted the software to identify specific claims the expert had made about his fees and whether he had ever offered certain opinions relevant to his expected testimony in the current case. The AI’s review identified past statements that were inconsistent with the expert’s report in the current case and provided citations to the relevant deposition transcripts from prior proceedings. It also generated a report identifying inconsistencies in the expert’s commentary on a specified medical imaging study by analyzing testimony given in prior cases. The AI completed in 45 minutes a task that would normally take lawyers days.
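
For readers curious how such a workflow looks mechanically, the sketch below is a purely hypothetical illustration, not CoCounsel’s actual implementation: it chunks transcript files and prompts a generic language model (the ask_llm function is a placeholder to be wired to whatever provider a firm uses) to flag prior statements that conflict with a stated current opinion.

```python
# Hypothetical sketch of prompting an LLM to scan deposition transcripts for
# statements inconsistent with an expert's current opinion. Illustrative only;
# ask_llm() is a stand-in, not a real vendor API.

from pathlib import Path

CURRENT_OPINION = "The imaging study shows no evidence of traumatic injury."

PROMPT_TEMPLATE = (
    "You are reviewing prior deposition testimony by an expert witness.\n"
    "Current opinion: {opinion}\n\n"
    "Transcript excerpt:\n{excerpt}\n\n"
    "Quote verbatim any statements in the excerpt that are inconsistent with "
    "the current opinion, with page/line context. If there are none, answer "
    "'No inconsistencies found.'"
)

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM provider; replace with a real client."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")

def review_transcripts(transcript_dir: str, chunk_chars: int = 8000) -> dict[str, list[str]]:
    """Chunk each transcript file and collect any flagged inconsistencies."""
    findings: dict[str, list[str]] = {}
    for path in sorted(Path(transcript_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        for chunk in chunks:
            answer = ask_llm(PROMPT_TEMPLATE.format(opinion=CURRENT_OPINION, excerpt=chunk))
            if "No inconsistencies found" not in answer:
                findings.setdefault(path.name, []).append(answer)
    return findings
```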

The speed at which AI tools can analyze expert reports can be especially valuable for small firms that do not have many attorneys on staff to review documents. Given how time-intensive reviewing an expert’s voluminous material is, AI enables lawyers at smaller firms to spend less time on document review and more time on other matters for multiple clients. The time savings also benefit litigants by reducing the billable hours lawyers charge clients for reviewing documents.

Scientific Evidence Validity

Even the best and most competent expert is only as good as the underlying science. The use of AI tools to verify expert claims in scientific and technical studies will likely increase as new AI models are developed. Researchers at IBM, Technische Universität Darmstadt, and Mohamed bin Zayed University of Artificial Intelligence created a model known as MISSCI to verify whether the underlying evidence in scientific studies supports the conclusions researchers draw from it. If a conclusion is not supported by the study or data cited, MISSCI can explain why the reasoning is flawed. For example, if an expert’s Study X makes Claim Y based on data cited from Study Z, MISSCI can check whether the findings in Study Z actually support Claim Y, calling the credibility of Study X, and of the expert, into question. While the model is still under development and is not intended to be a “standalone tool for fact checking,” its creators think it could help humans verify scientific data proffered in studies more efficiently. Counsel faced with scientific claims could use MISSCI or similar tools to test the accuracy of the scientific “facts” underlying expert testimony and prior work.
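
To make the Study X / Claim Y / Study Z structure concrete, the toy sketch below (not the MISSCI model itself, which relies on trained language models) simply flags a claim for human review when the cited study’s reported findings share little content with the claim.

```python
# Deliberately simplified illustration of checking a claim against the study it
# cites. A real verifier would use trained models, not word overlap.

from dataclasses import dataclass

@dataclass
class CitedStudy:
    title: str
    findings: list[str]      # sentences summarizing what the study actually reports

@dataclass
class ExpertClaim:
    text: str                # Claim Y asserted in Study X
    cited_study: CitedStudy  # Study Z offered as support

def content_words(sentence: str) -> set[str]:
    stop = {"the", "a", "an", "of", "in", "to", "and", "is", "are", "that"}
    return {w.strip(".,").lower() for w in sentence.split()} - stop

def support_score(claim: ExpertClaim) -> float:
    """Best word-overlap ratio between the claim and any finding in the cited study."""
    claim_words = content_words(claim.text)
    best = 0.0
    for finding in claim.cited_study.findings:
        overlap = claim_words & content_words(finding)
        best = max(best, len(overlap) / max(len(claim_words), 1))
    return best

study_z = CitedStudy(
    title="Study Z",
    findings=["Mice exposed to compound A showed reduced inflammation."],
)
claim_y = ExpertClaim(text="Compound A cures arthritis in humans.", cited_study=study_z)

if support_score(claim_y) < 0.5:
    print("Flag for human review: cited study may not support this claim.")
```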

AI as a Judicial “Aide”

AI tools that can analyze scientific studies could also help courts determine whether expert scientific testimony is admissible under Daubert v. Merrell Dow Pharmaceuticals.[1]

Under the Daubert standard, which is now incorporated in Federal Rule of Evidence 702, “the trial judge must ensure that any and all scientific testimony or evidence admitted is not only relevant, but reliable.”[2] Daubert held that courts may consider, among other factors, whether a scientific technique or theory has been subject to peer review or publication in determining whether to admit testimony relying on it. The reasoning is that peer review increases the probability that other qualified experts will identify flaws in scientific techniques or methodologies.[3] AI tools such as MISSCI could help identify flaws in the published works underlying an expert’s claims, even if those flaws went undetected during peer review. Indeed, despite reviewers’ best efforts, errors do slip through the peer review process.

Another Daubert factor courts may consider is a technique’s rate of error.[4] If a technique has a high rate of error, judges may prevent experts from basing their testimony on studies that rely on it. In the Daubert context, courts could also potentially use AI tools to scrutinize a scientific technique’s proffered rate of error and determine whether that rate is supported by the underlying scientific data.

Conclusion

AI can be a powerful tool to synthesize an expert’s body of work and help detect inconsistencies in the expert’s statements. With careful prompts, AI tools can save lawyers significant time in reviewing expert reports. Additionally, new AI models that can analyze the veracity of scientific data may emerge as useful resources for examining whether claims experts have made in publications or reports are supported by the underlying data. Courts and parties may likewise turn to AI tools in Daubert challenges to test whether the underlying data actually support the low error rates claimed for certain scientific techniques and methods when parties seek to admit evidence and testimony based on them.


[1] 509 U.S. 579 (1993).

[2] Id. at 589.

[3] Id. at 593-594.

[4] Id. at 594.


About the Author

Ben Richmond is a 2L at William & Mary Law School. Ben’s research interests include the practical uses of AI in the legal profession, cybersecurity risks to critical infrastructure, and the intersection of technology, trade, and national security. Prior to law school, Ben worked as a litigation analyst and paralegal in the National Security and Cybercrime section of the U.S. Attorney’s Office for the Eastern District of New York. In his free time, Ben enjoys playing intramural sports with his law school classmates. This piece represents the author’s views alone.

AI Newsletter

IP Concerns with ChatGPT

By Mike Papakonstantinou

Copyright

Overview

ChatGPT implicates various copyright issues, including infringement and ownership. As noted in previous articles, OpenAI used voluminous datasets to develop its large language model, drawing on Wikipedia, digitized books, and other sources from across the web. Critics allege that using this input data to train ChatGPT-like systems may constitute copyright infringement on a very large scale because the data was used without the permission of the copyright holders. In addition, although Wikipedia provides licensing options for its copyrighted text, those licenses typically require some form of attribution or linking, and as of this article’s publication, ChatGPT does not appear to cite Wikipedia or other sources regularly or accurately.

In the United States, copyright law grants the owner of a copyright the exclusive right to reproduce and distribute the work, to prepare derivative works, and other associated rights. At the same time, without large amounts of quality training data, AI systems like ChatGPT simply perform poorly or are ineffective.

Under the fair use doctrine, the use of copyrighted works to develop or enhance ChatGPT may nonetheless be permissible, because the doctrine allows unlicensed uses of copyrighted works in some scenarios. A common example is educational use, such as a professor distributing a relevant article to students for teaching or scholarship. While courts consider various factors to determine fair use, the inquiry ultimately may turn on how “transformative” the use is. Transformative uses add something new, with a different character or purpose, and do not substitute for the original use of the work. Consequently, even though OpenAI has commercialized ChatGPT and it is no longer purely a research tool, the training use may still qualify as fair use. On this view, the copying of data for AI training is non-expressive, making the resulting output highly transformative of the original works: the purpose of generative AI (synthesizing new information) differs fundamentally from the original purpose of those works (expressing the ideas of their authors).

Analysis

Generative AI systems, which can create novel content, may infringe the rights granted to a copyright owner in various ways. For example, when a company like OpenAI assembles a database of training data, it is likely making a copy of each work for inclusion in the database, which may violate the right of reproduction. In addition, if ChatGPT creates a new work based on its training data and a user’s prompt, the resulting generated work may be a derivative work. If so, there may still be copyright infringement, because the copyright owner of the original work (here, a work in the training data) has the exclusive right to prepare derivative works, subject to any fair use defense.

Furthermore, if ChatGPT outputs a non-de minimis amount of copyrighted text, such as a passage from a textbook or novel, in response to a user’s prompt, that output may constitute distribution. Even if the system quoted the text and properly cited the copyrighted work (“attribution”), attribution alone does not shield an infringer from copyright liability. There do, however, appear to be some safeguards built into ChatGPT to prevent the widespread dissemination of copyrighted materials; for example, at least one outlet reported that ChatGPT rejected direct prompts to output copyrighted works.

Moreover, works generated by ChatGPT present novel legal questions of authorship and ownership. To receive copyright protection (and thus create an ownership right in the copyright), a work must meet the authorship requirement. Based on current U.S. Copyright Office policy and case law, the Office will likely reject an application that lists a machine or algorithm as the author. The copyright status of a work created by AI with human involvement, however, is a different legal issue, and the inquiry may turn on the extent of the human’s contribution to the work’s authorship. Margaret Esquenet, a partner at the IP law firm Finnegan, has opined that AI-generated works might constitute either works in the public domain or derivative works of materials in the training data.

In addition, in light of rapid technological developments, the U.S. Copyright Office has stated its intention to focus on legal uncertainties involving technology and copyright in 2023. The Office recently published guidance indicating that AI-assisted works are eligible for copyright protection if there is sufficient human authorship; one example it gives is a human selecting or arranging AI-generated content in a sufficiently creative way that the overall work meets the authorship requirement. In February 2023, the Office also determined that individual AI-generated illustrations and images used by a human author in a novel did not receive copyright protection, though the entirely human-written text and the work as a whole did. Although the Office intended these actions to clarify murky legal questions, confusion remains about the boundaries of copyright eligibility for AI-assisted works, and attorneys predict that courts will still need to decide individual cases and add clarity to the Office’s guidance.

Even though OpenAI’s terms of service indicate that the end-user ultimately receives “right, title and interest” in the resulting work, that work still may not meet the authorship standard required for copyright protection. ChatGPT end-users should understand that the system’s outputs may not currently be eligible for copyright protection under existing laws and policies, though this may change as the Office and the courts provide more clarity in the generative AI domain. More guidance is needed both to protect human-made creations and to inform how AI-assisted creations can be protected.

Patent

Overview

ChatGPT implicates issues in patent law as well. Given its huge training data set, algorithmic sophistication, and vast computing resources, it is possible for ChatGPT to create inventions. However, in 2022 the Federal Circuit held that an AI system cannot be the inventor on a patent because an inventor must be a natural person. That ruling notably contrasts with the decision of South Africa’s patent office, which granted a patent naming an AI system as the inventor of the same invention.

Analysis

The patent at issue in both the American litigation and the South African application listed an AI system as the sole inventor, but AI-augmented inventions generated with ChatGPT present different legal issues. For example, when humans use generative AI systems, it can be unclear how central the natural persons were to the underlying invention. If ChatGPT does most of the inventive work, then the person who prompted the inventive output “may not be able to take the oath required by the patent office that they are the rightful inventors,” as noted here. And if no natural person contributed to the invention, then ChatGPT alone likely created it, which, under last year’s Federal Circuit decision, bars patentability. The United States Patent and Trademark Office (USPTO) has solicited input from stakeholders on AI and inventorship. Although the USPTO cannot overrule the Federal Circuit’s holding, it can suggest potential changes to federal patent and other IP laws. Mark Lemley, a Stanford Law professor and counsel at Lex Lumina, has opined that Congress must amend the statute because “AI is engaging in significant inventive activity…the PTO and the courts have to pretend that activity was done by a human, or conclude that the invention isn’t patentable at all because it was done by an AI.”

In addition, one requirement for patentability is non-obviousness, judged from the perspective of a “person having ordinary skill in the art” (PHOSITA), as noted here. A person inventing with ChatGPT likely raises the knowledge and skill level attributed to a PHOSITA, because the system’s vast training data and computing resources span numerous technical fields, essentially merging various “arts” into one pool of data. Raising the knowledge and skill attributed to a PHOSITA raises the threshold for what is non-obvious, potentially inflating the barrier to patentability. Patent law consequently must clarify what non-obviousness entails in the context of AI-augmented inventions; without such clarity, individuals and companies will operate in uncertainty as they pursue patent protection for AI-assisted creations.

Trademark

Overview

Unlike in copyright and patent law, it is “legally irrelevant” who or what creates a trademark. Consequently, tools like ChatGPT can help individuals and companies generate marks eligible for federal trademark protection. Practitioners already use AI tools to augment their trademark practices, including automated systems and AI-assisted methods for client counseling. Some tools, such as Corsearch, check the USPTO trademark registry to spot existing trademarks that could bar registration of a new product name or brand; they process searches quickly to facilitate review by a practitioner, who ultimately provides legal counsel to clients. Corsearch weighs various factors, such as the degree of similarity between a desired mark and registered trademarks. Other AI tools help trademark owners automatically comb the web for potentially infringing products or counterfeit goods so the owners can enforce their trademark rights.
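
As a rough illustration of how an automated first-pass screen might work mechanically (a hypothetical sketch, not Corsearch’s actual algorithm, with made-up marks), the snippet below scores string similarity between a proposed mark and a list of registered marks and surfaces close matches for attorney review.

```python
# Hypothetical first-pass trademark screen: flag registered marks that are
# textually close to a candidate mark. The legal likelihood-of-confusion
# analysis remains with the practitioner.

from difflib import SequenceMatcher

registered_marks = ["NOVATEK", "LUMINEX", "BRIGHTPATH", "STELLARIS"]  # illustrative data

def similarity(a: str, b: str) -> float:
    """Simple case-insensitive string similarity between two marks."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen_candidate(candidate: str, threshold: float = 0.7) -> list[tuple[str, float]]:
    """Return registered marks whose similarity to the candidate meets the threshold."""
    hits = [(mark, round(similarity(candidate, mark), 2)) for mark in registered_marks]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])

print(screen_candidate("LUMINEXA"))  # likely flags LUMINEX for attorney review
```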

Analysis

ChatGPT can help attorneys by generating ideas for trademarks. For example, Ashley G. Kessler, a trademark attorney at Cozen O’Connor, recently described using ChatGPT to identify brand names for a client. Although generative AI is helpful, it is not a complete replacement for attorneys in this context: of the ten names ChatGPT generated, two conflicted with existing registrations at the USPTO.

Written for the Fall 2023 AI Newsletter

AI Newsletter

From Principles to Principals: Establishing Safe Harbors & Regulating the Ethical Development of Artificial Intelligence Systems

By Jeremy Bloomstone

In the market for AI analytics, systems connect actors from vastly different sectors of the global economy, and governments seek to leverage AI-powered insights in their administrative and governance functions. The relationship between innovative, profit-driven vendors and demand- and competition-driven customers will soon strain traditional concepts of contractual and product liability, and questions of ethical responsibility will intensify as the bespoke development and mass deployment of AI resources continue to accelerate. Regulation in this space should address two fundamental questions: who bears responsibility when AI systems produce problematic decisions, applications, and operations, and when and where should liability attach in the AI lifecycle?

During the 2022 Problematic AI Symposium at William & Mary Law School, Dennis Hirsch, Professor of Law and Director of the Program on Data and Governance at The Ohio State University’s Moritz College of Law, suggested that industry executives view responsible management and the ethics of AI through the lens of corporate sustainability rather than rote compliance. This recognition of responsibility influences corporate decision-making and shapes companies’ AI development processes to reduce regulatory risk, build and sustain trust, retain employees, improve quality and competitiveness, and demonstrate company values. However, studies have shown that looming AI regulations also affect corporate decision-making in ways that reduce managers’ risk tolerance and lower the internal priority given to ethical product development and the adoption of AI. Hirsch’s analysis of developing trends in law and policy, as well as of strategies for responsible AI management, leads to a forward-looking conclusion: future regulatory proposals should prioritize shielding the incentives and impulses to innovate while advancing procedural mechanisms that hold developers accountable to the principles they publicly proclaim and avoiding burdensome, ineffective obligations.

As pressures mount for sustained innovation in AI, legislators and regulators in the US and EU may turn to check-the-box compliance measures, which fail to reflect the active and innovative governance many companies already apply in the design, development, and deployment of AI technologies. Creating safe harbors from liability for companies that commit to a reporting, disclosure, and monitoring scheme would move the needle beyond self-regulation. Such safe harbors would also recognize and capitalize on the investment and leadership of AI developers who commit to ethical practices. Functionally, this approach would instill accountability for adhering to the ethical principles organizations already publicize, while leveraging the audits and impact assessments many already conduct at various stages of the AI development lifecycle. Organizations would also be required to share liability among all contracting parties and stakeholders involved in designing, developing, and monitoring any AI solution or system brought to market.

Fundamentally, a safe harbor approach protects the process of upholding principles while allowing organizations some flexibility in how they structure their oversight. Industry leaders like Cisco, Hewlett Packard Enterprise, IBM, Microsoft, and Google have already set up internal procedural infrastructure: review boards, processes for identifying and escalating uniquely risky projects, and mechanisms for evaluating potential and actual system failures in real time. Recognizing and distinguishing structural harms from acute personal injuries caused by AI decision-making is critical to this scheme, because those harms and injuries require diverging regulatory approaches. And ensuring that regulators, key stakeholders, and the public have access to information throughout the AI design, development, and deployment lifecycle can not only help curtail potential abuses arising from AI deployment but also inform awareness of market participation and the potential need for sector-specific regulation.

The discussion throughout CLCT’s Problematic AI Symposium highlighted the competing perspectives, public concerns, and geopolitical pressures to calibrate legislation and regulation in this evolving space. A first step should be formalizing accountability, responsibility, and liability for corporate best practices and incentivizing their wider adoption by other organizations seeking to develop ethical AI technologies or responsibly integrate AI components into their business operations.

Written for the Fall 2023 AI Newsletter