
    The Human Factor in Artificial Intelligence AI Regulation: Ensuring Accountability

    June 30, 2024

    As artificial intelligence (AI) technology continues to advance and permeate various aspects of society, it poses significant challenges to existing legal frameworks. One recurrent issue is how the law should regulate entities that lack intentions. Traditional legal principles often rely on the concept of mens rea, or the mental state of the actor, to determine liability in areas such as freedom of speech, copyright, and criminal law. However, AI agents, as they currently exist, do not possess intentions in the same way humans do. This presents a potential loophole where the use of AI could be immunized from liability simply because these systems lack the requisite mental state.

    A new paper from Yale Law School, ‘Law of AI is the Law of Risky Agents without Intentions,’ addresses this problem by proposing the use of objective standards to regulate AI. These standards are drawn from parts of the law that either ascribe intentions to actors or hold them to objective standards of conduct. The core argument is that AI programs should be viewed as tools used by human beings and organizations, making those humans and organizations responsible for the AI’s actions. Because the traditional legal framework depends on the mental state of the actor to determine liability, it does not map cleanly onto AI agents that lack intentions; the paper therefore suggests shifting to objective standards to bridge this gap. The author argues that humans and organizations using AI should bear responsibility for any harm caused, much as principals are liable for their agents. The paper further emphasizes imposing duties of reasonable care and risk reduction on those who design, implement, and deploy AI technologies, and calls for clear legal standards and rules so that companies dealing in AI internalize the costs that the risks of their technologies impose on society.

    The paper draws a comparison between AI agents and the principal-agent relationship in tort law, which offers a valuable framework for assigning liability in the context of AI technologies. In tort law, principals are held liable for the actions of their agents when those actions are performed on the principal’s behalf; the doctrine of respondeat superior is a specific application of this principle, under which employers are liable for torts committed by their employees in the course of employment. When people or organizations use AI systems, those systems can be seen as agents acting on their behalf, and legal responsibility for the AI agents’ actions should therefore be attributed to the human principals who employ them. This ensures that individuals and companies cannot escape liability simply by using AI to perform tasks that would otherwise be done by human agents.

    Therefore, given that AI agents lack intentions, the law should hold them and their human principals to objective standards, including:

    • Negligence: AI systems should be designed with reasonable care.
    • Strict liability: In certain high-risk applications, such as those involving fiduciary duties, the highest level of care may be required.
    • No reduced duty of care: Substituting an AI agent for a human agent should not result in a reduced duty of care. For example, if an AI makes a contract on behalf of a principal, the principal remains fully accountable for the contract’s terms and consequences.

    The paper also addresses the challenge of regulating AI programs, which inherently lack intentions, within existing legal frameworks that rely on mens rea (the mental state of the actor) to assign liability. It notes that the law already ascribes intentions to entities that lack clear human intentions, such as corporations or associations, and holds actors to external standards of behavior regardless of their actual intentions. The paper therefore suggests that the law should treat AI programs as if they have intentions, presuming that they intend the reasonable and foreseeable consequences of their actions; this would hold AI systems accountable for outcomes in much the same way human actors are treated in certain legal contexts. The paper also considers whether subjective standards, which are typically used to protect human liberty, should apply to AI programs. Its main contention is that AI programs lack the individual autonomy and political liberty that justify subjective standards for human actors. It gives the example of First Amendment protection, which balances the rights of speakers and listeners: protecting AI speech for the benefit of listeners does not justify applying subjective standards, because AI lacks subjective intentions. Instead, the law should apply objective standards of behavior to AI programs based on what a reasonable person would do in similar circumstances.

    The paper presents two practical applications in which AI programs should be regulated using objective standards: defamation and copyright infringement. It explores how objective standards and reasonable regulation can address liability issues arising from AI technologies, focusing in particular on large language models (LLMs) that can produce harmful or infringing content.

    The key components of these applications are:

    Defamatory Hallucinations:

    LLMs can generate false and defamatory content when prompted, but unlike humans they lack intentions, making traditional defamation standards inapplicable. The paper argues that such systems should instead be treated analogously to defectively designed products, with designers expected to implement safeguards that reduce the risk of defamatory content. If an AI agent acts as the prompter, a product liability approach likewise applies. Human prompters, by contrast, are liable if they publish defamatory material generated by LLMs, with standard defamation law modified to account for the nature of AI: users must exercise reasonable care in designing prompts and verifying the accuracy of AI-generated content, and must refrain from disseminating material they know or reasonably suspect to be false and defamatory.

    Copyright Infringement:

    Concerns about copyright infringement have led to multiple lawsuits against AI companies, since LLMs may generate content that infringes copyrighted material, raising questions about fair use and liability. To address this, AI companies can secure licenses from copyright holders to use their works in training and in generating new content; a collective rights organization could facilitate blanket licenses, although this approach has limitations given the diverse and dispersed nature of copyright holders. Furthermore, AI companies should be required to take reasonable steps to reduce the risk of copyright infringement as a condition of a fair use defense.

    Conclusion:

    The research paper explores legal accountability for AI technologies using principles from agency law, ascribed intentions, and objective standards. By treating AI actions like those of human agents under agency law, it emphasizes that principals must take responsibility for their AI agents’ actions, ensuring no reduction in the duty of care.

    The post The Human Factor in Artificial Intelligence AI Regulation: Ensuring Accountability appeared first on MarkTechPost.

