As artificial intelligence (AI) technology continues to advance and permeate various aspects of society, it poses significant challenges to existing legal frameworks. One recurrent issue is how the law should regulate entities that lack intentions. Traditional legal principles often rely on the concept of mens rea, or the mental state of the actor, to determine liability in areas such as freedom of speech, copyright, and criminal law. However, AI agents, as they currently exist, do not possess intentions the way humans do. This creates a potential loophole in which the use of AI could be shielded from liability simply because these systems lack the requisite mental state.
A new paper from Yale Law School, ‘Law of AI is the Law of Risky Agents without Intentions,’ addresses this problem by proposing the use of objective standards to regulate AI. These standards are drawn from areas of law that either ascribe intentions to actors or hold them to objective standards of conduct. The core argument is that AI programs should be viewed as tools used by human beings and organizations, making those humans and organizations responsible for the AI’s actions. Because the traditional legal framework depends on the mental state of the actor to determine liability, it does not map cleanly onto AI agents that lack intentions; the paper therefore suggests shifting to objective standards to bridge this gap. The author argues that humans and organizations using AI should bear responsibility for any harm caused, much as principals are liable for their agents, and emphasizes imposing duties of reasonable care and risk reduction on those who design, implement, and deploy AI technologies. The paper also calls for clear legal standards and rules to ensure that companies dealing in AI internalize the costs of the risks their technologies impose on society.
The paper draws an instructive comparison between AI agents and the principal-agent relationship in tort law, which offers a valuable framework for assigning liability in the context of AI technologies. In tort law, principals are held liable for the actions of their agents when those actions are performed on the principal’s behalf. The doctrine of respondeat superior is a specific application of this principle: employers are liable for torts committed by their employees in the course of employment. When people or organizations use AI systems, these systems can be seen as agents acting on their behalf. The core idea is that legal responsibility for the actions of AI agents should be attributed to the human principals who employ them. This ensures that individuals and companies cannot escape liability simply by using AI to perform tasks that would otherwise be done by human agents.
Therefore, given that AI agents lack intentions, the law should hold them and their human principals to objective standards, which include:
Negligence—AI systems should be designed with reasonable care.
Strict liability—In certain high-risk applications, or where fiduciary duties apply, the highest level of care may be required.
No reduced duty of care—Substituting an AI agent for a human agent should not result in a reduced duty of care. For example, if an AI makes a contract on behalf of a principal, the principal remains fully accountable for the contract’s terms and consequences.
The paper also addresses the challenge of regulating AI programs, which inherently lack intentions, within existing legal frameworks that often rely on the concept of mens rea (the mental state of the actor) to assign liability. It notes that in traditional legal contexts, the law sometimes ascribes intentions to entities that lack clear human intentions, such as corporations or associations, and holds actors to external standards of behavior regardless of their actual intentions. The paper therefore suggests that the law should treat AI programs as if they have intentions, presuming that they intend the reasonable and foreseeable consequences of their actions. This approach would hold AI systems accountable for outcomes in a manner similar to how human actors are treated in certain legal contexts.

The paper also examines whether subjective standards, which are typically used to protect human liberty, should apply to AI programs. Its main contention is that AI programs lack the individual autonomy and political liberty that justify the use of subjective standards for human actors. It gives the example of First Amendment protection, which balances the rights of speakers and listeners: even if AI speech is protected for the benefit of listeners, that rationale does not justify applying subjective standards, because AI lacks subjective intentions. Instead, the law should ascribe intentions to AI programs by presuming they intend the reasonable and foreseeable consequences of their actions, and should apply objective standards of behavior based on what a reasonable person would do in similar circumstances.
The paper presents two practical applications in which AI programs should be regulated using objective standards: defamation and copyright infringement. It explores how objective standards and reasonable regulation can address liability issues arising from AI technologies, focusing on how to determine liability for large language models (LLMs) that can produce harmful or infringing content.
The key components of the two applications it discusses are:
Defamatory Hallucinations:
LLMs can generate false and defamatory content when prompted, but unlike humans, they lack intentions, so traditional defamation standards do not apply directly. Instead, they should be treated analogously to defectively designed products, and their designers should be expected to implement safeguards that reduce the risk of defamatory content. If an AI agent itself acts as the prompter, a products liability approach applies; human prompters, by contrast, are liable if they publish defamatory material generated by LLMs, with standard defamation law modified to account for the nature of AI. Users must exercise reasonable care in designing prompts and verifying the accuracy of AI-generated content, and must refrain from disseminating material they know or reasonably suspect to be false and defamatory.
Copyright Infringement:
Concerns about copyright infringement have already led to multiple lawsuits against AI companies, since LLMs may generate content that infringes copyrighted material, raising questions about fair use and liability. To address this, AI companies can secure licenses from copyright holders to use their works in training and in generating new content; a collective rights organization could facilitate blanket licenses, though this approach has limitations because copyright holders are diverse and dispersed. Furthermore, AI companies should be required to take reasonable steps to reduce the risk of copyright infringement as a condition of asserting a fair use defense.
Conclusion:
The paper explores legal accountability for AI technologies using principles from agency law, ascribed intentions, and objective standards. By treating AI agents similarly to human agents under agency law, it emphasizes that principals must take responsibility for their AI agents’ actions, ensuring no reduction in the duty of care.