By Jeremy Bloomstone
Across the public and private sectors, stakeholders have increasingly recognized and coalesced around foundational principles that should guide the creation and adoption of Artificial Intelligence (AI) systems. While these principles differ subtly across industry standard-bearers, they all converge around the following concepts: fairness and inclusivity; robustness and reliability; privacy and security; and transparency and explainability. Corporate initiatives to translate these value statements into effective governance structures, impact assessments, and deployment practices are proactively shaping the regulatory environment and the legal landscape for addressing the rapid innovation and expansion of these predictive and generative AI tools. With the introduction of new practical frameworks for managing AI risk and a growing body of analysis of the cascading policy proposals focused on risk-based mitigation and evaluation, the tide is quickly rising on the need for regulation.
In the market for AI analytics, these systems connect actors from vastly different sectors of the global economy. Governments seek to leverage AI-powered insights in their administrative and governance functions. The relationship between innovative, profit-driven vendors and demand- and competition-driven customers will soon strain traditional concepts of contractual and product liability. Questions of ethical responsibility will also sharpen as bespoke development and mass deployment of AI resources continue to accelerate. Regulation in this space should address two fundamental questions: who bears responsibility when AI systems produce problematic decisions, applications, and operations; and when and where should liability attach in the AI lifecycle?
During the 2022 Problematic AI Symposium at William & Mary Law School, Dennis Hirsch, Professor of Law and Director of the Program on Data and Governance at the Moritz College of Law at The Ohio State University, suggested that industry executives view responsible management and the ethics of AI through the lens of corporate sustainability, as opposed to rote compliance. This recognition of responsibility influences corporate decision-making and shapes companies’ AI development processes to reduce regulatory risk, build and sustain trust, retain employees, improve quality and competitiveness, and demonstrate company values. However, studies have shown that looming regulations on AI also affect corporate decision-making in ways that reduce the risk tolerance of managers and lower the internal priority placed on ethical product development and the adoption of AI. Hirsch’s analysis of developing trends in law and policy, as well as management strategies for responsible AI, leads to a forward-looking conclusion: future regulatory proposals should prioritize shielding the incentives and impulses to innovate while advancing procedural mechanisms that hold developers accountable to the principles they proclaim publicly and avoiding burdensome and ineffective obligations.
As pressures mount for sustained innovation in AI, legislators and regulators in the US and EU might turn to check-the-box style compliance measures, which fail to reflect the active and innovative governance many companies already leverage in the design, development, and deployment of AI technologies. Creating safe harbors from liability for companies that commit to a reporting, disclosure, and monitoring scheme would move the needle beyond self-regulation. Such safe harbors would also recognize and capitalize on the investment and leadership of AI developers in committing to ethical practices. Functionally, this approach would instill accountability for adhering to the ethical principles organizations are already publicizing, while also leveraging the audits and impact assessments they conduct at various stages of the AI development lifecycle. It would likewise require organizations to share liability among all contracting parties and stakeholders involved in designing, developing, and monitoring any AI solution or system brought to market.
Fundamentally, a safe harbor approach protects principles through process and would allow organizations some flexibility in how they structure their oversight. Industry leaders like Cisco, Hewlett Packard Enterprise, IBM, Microsoft, and Google have already set up internal procedural infrastructure: review boards, processes for identifying and escalating uniquely risky projects, and mechanisms for evaluating potential and actual system failures in real time. Recognizing and distinguishing between structural harms and acute personal injuries arising from AI decision-making is critical to this scheme because the two require diverging regulatory approaches. But ensuring that regulators, key stakeholders, and the public have access to information throughout the AI design, development, and deployment lifecycle can help not only curtail potential abuses arising out of AI deployment but also inform awareness of market participation and the potential need for sector-specific regulation.
The discussion throughout CLCT’s Problematic AI Symposium highlighted the competing perspectives, public concerns, and geopolitical pressures that will shape how legislation and regulation are calibrated in this evolving space. A first step should be formalizing accountability, responsibility, and liability around corporate best practices and incentivizing their wider adoption by other organizations seeking to develop ethical AI technologies or responsibly integrate AI components into their business operations.
Written for the Fall 2023 AI Newsletter