
    How General-Purpose AI (GPAI) Models Are Regulated

    April 24, 2025
    1. What are General-Purpose AI (GPAI) Models?
    2. Identifying the GPAI Model with Systemic Risk
    3. Regulatory Requirements for GPAI Models Posing Systemic Risk
    4. How Does Tx Ensure Your AI Models’ Compliance with AI Regulations?

The current pace of AI advancement is unprecedented: it is rapidly transforming industries, with general-purpose AI (GPAI) models like Gemini and ChatGPT leading the change. That advancement also brings significant challenges. Industry estimates suggest that around 80-85% of AI projects fail to meet their goals due to poor data quality, insufficient risk controls, and other issues. Recognizing these challenges, regulatory bodies are stepping in to ensure responsible and ethical AI development and deployment. The EU Artificial Intelligence Act (AI Act) introduces specific obligations for GPAI models, especially those posing systemic risks.

This blog discusses the criteria for identifying GPAI models that pose systemic risk, the EU's regulatory requirements for such models, and how providers can keep their AI models compliant with evolving regulations.

What are General-Purpose AI (GPAI) Models?

The EU AI Act defines GPAI models as AI models trained on large volumes of data using self-supervision at scale. These models can perform multiple tasks across domains rather than being tied to a single function, and they can be adapted and fine-tuned to support different applications, including content creation, coding, translation, data analytics, and decision-making. Generally, there are three types of GPAI models:

    Foundational Models

    For example, GPT-4, Gemini, and Llama, trained on large-scale data and fine-tuned for downstream tasks.

    Multimodal Models

For example, models that can generate text, images, audio, etc., by handling and integrating multiple types of input.

    Instruction-Tuned Models

    Fine-tuned to follow human instructions across general tasks.

Beyond these, the EU AI Act distinguishes two regulatory categories of GPAI models:

    GPAI Models Without Systemic Risk

Although these models are broadly capable of performing diverse tasks like code assistance, content creation, and translation, they do not meet the criteria for systemic impact. They still need to follow the transparency and usage guidelines defined by regulatory bodies, but they are not subject to the stricter obligations the law reserves for systemic-risk models.

    GPAI Models with Systemic Risk

These powerful models significantly impact the economy, business operations, and society; left unchecked or unregulated, the consequences could be severe. Systemic risks may emerge from:

    Model scale and capabilities

    Trained on extremely large datasets, the models gain advanced autonomy and can generate compelling content, making them harder to predict or control. For instance, GPT-4 or Gemini can write software code, simulate human conversations, or generate legal documents.

    Deployment Reach

Organizations increasingly integrate GPAI models into their critical systems, multiplying the models' reach. For instance, a general-purpose model embedded in a cloud platform, a financial services tool, or a government chatbot could affect millions of users if it generates biased or harmful outputs.

    Risk of Misuse

The more advanced the model, the higher the risk of it being used unethically or maliciously. Bad actors can use it to create misinformation or deepfakes, or to exploit security gaps within an infrastructure. For instance, they might use an AI model to craft phishing emails, spread misinformation during a nationwide event, or manipulate stock markets.

    Identifying the GPAI Model with Systemic Risk

GPAI models that carry systemic risk can affect everyday life and business operations at scale. The impact is so significant that a single negative AI incident could disturb the whole technology value chain, disrupt business operations, and harm the end users who rely on it. That potential for widespread impact is what places a model in the systemic-risk category. The question, then, is how this impact capability is measured.

Under the Act, a GPAI model is presumed to have high-impact capabilities when the cumulative compute used for its training exceeds 10²⁵ floating-point operations (FLOPs). Note that this threshold counts the total operations performed during training; it is not a processing speed, which would be measured in FLOP/s. A larger training compute budget generally signals a larger, more capable model, and therefore a higher risk factor. Training compute is not the only indicator, either: the European Commission's AI Office will set additional benchmarks to identify and assess systemic risk.
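To make the threshold concrete, here is a minimal sketch of how a provider might estimate training compute and compare it against the 10²⁵ FLOPs line. It relies on the widely used 6 × parameters × tokens approximation for transformer training compute, which comes from the scaling-laws literature rather than the Act itself, and the model sizes are hypothetical.

```python
# Rough, illustrative check against the AI Act's 10^25 FLOPs presumption
# threshold. The 6 * N * D approximation and the model sizes below are
# assumptions for illustration, not figures from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute threshold

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training FLOPs as ~6 * N * D (forward + backward passes)."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 500 billion parameters trained on 10 trillion tokens.
flops = estimate_training_flops(n_parameters=5e11, n_training_tokens=1e13)
print(f"Estimated training compute: {flops:.2e} FLOPs")                 # 3.00e+25
print("Presumed high-impact:", flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # True
```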

The AI Office will supervise and enforce the AI Act's rules for GPAI model providers (Article 88 AI Act). It will work alongside national authorities in EU countries to help them check whether AI systems meet the required standards. It can ask companies to provide information about their models, carry out evaluations, and, if needed, demand changes or even remove a model from the market entirely. If GPAI model providers don't comply, the Office can issue fines of up to 3% of a company's worldwide annual revenue or 15 million euros, whichever is higher.
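As a quick worked example of that fine ceiling (using annual revenue as a stand-in for the Act's worldwide annual turnover):

```python
def max_gpai_fine_eur(worldwide_annual_revenue_eur: float) -> float:
    """Ceiling on a GPAI non-compliance fine: the higher of 3% of
    worldwide annual revenue or EUR 15 million, as described above."""
    return max(0.03 * worldwide_annual_revenue_eur, 15_000_000)

# Hypothetical provider with EUR 2 billion in annual revenue:
print(f"Maximum fine: EUR {max_gpai_fine_eur(2e9):,.0f}")  # EUR 60,000,000
```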

Regulatory Requirements for GPAI Models Posing Systemic Risk

Providers of GPAI models identified as posing systemic risk must adhere to the following obligations (a sketch of how these might be tracked appears after the list):

    Risk Assessment and Mitigation

    Conduct thorough evaluations of their models using standardized protocols and tools that reflect the current state of the art. This includes adversarial testing to identify and mitigate systemic risks.

    Incident Reporting

    Monitor serious incidents and report relevant information, including possible corrective measures, to the AI Office and, where necessary, to national competent authorities immediately.

    Cybersecurity Measures

    Ensure adequate cybersecurity for the model and its physical infrastructure to prevent unauthorized access and other security threats.

    Compliance Demonstration

Until harmonized standards are published, providers can use codes of practice to demonstrate compliance with the AI Act's requirements.

    Transparency and Documentation

    Maintain detailed records of the model’s development and testing processes. Provide necessary information to downstream providers integrating the model into their AI systems while safeguarding intellectual property rights.
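To show how a provider might keep track of these five obligations in practice, here is a minimal, illustrative sketch. The record structure and obligation names are our own shorthand, not terminology or tooling mandated by the Act.

```python
# Illustrative internal tracker for the five systemic-risk obligations above.
from dataclasses import dataclass, field

OBLIGATIONS = (
    "risk_assessment_and_mitigation",  # state-of-the-art evals + adversarial testing
    "incident_reporting",              # serious incidents reported to the AI Office
    "cybersecurity_measures",          # model and physical infrastructure
    "compliance_demonstration",        # codes of practice until standards exist
    "transparency_and_documentation",  # records + info for downstream providers
)

@dataclass
class GpaiComplianceRecord:
    model_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, obligation: str) -> None:
        if obligation not in OBLIGATIONS:
            raise ValueError(f"Unknown obligation: {obligation}")
        self.completed.add(obligation)

    def outstanding(self) -> list:
        """Obligations not yet evidenced for this model."""
        return [o for o in OBLIGATIONS if o not in self.completed]

record = GpaiComplianceRecord(model_name="example-gpai-model")
record.mark_done("risk_assessment_and_mitigation")
print(record.outstanding())  # the four obligations still to be evidenced
```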

    How Does Tx Ensure Your AI Models’ Compliance with AI Regulations?

With the increasing adoption of AI technologies, a critical gap also emerges: ensuring AI models' reliability, authenticity, ethics, and responsibility. At Tx, we understand these challenges and lead the way in AI quality engineering by diving deep into your general-purpose AI models' functionality. Our years of experience ensuring compliance with region-specific regulatory requirements enable our clients to build GPAI models that are robust, secure, trustworthy, and scalable.

    General-purpose AI models, like GPT-4 and Gemini, are increasingly powerful and widely used, but they also pose risks when left unchecked. The EU AI Act sets strict rules for models with systemic impact—those using massive computing power or influencing critical sectors. These rules include mandatory risk assessments, incident reporting, and cybersecurity measures. The AI Office enforces these obligations. Tx supports organizations by helping ensure their AI models meet compliance standards through in-depth testing, governance, and responsible deployment practices. Contact our AI QE experts now to find out how Tx can assist with your AI projects.

    The post How General-Purpose AI (GPAI) Models Are Regulated first appeared on TestingXperts.
