Close Menu
    DevStackTipsDevStackTips
    • Home
    • News & Updates
      1. Tech & Work
      2. View All

      Tenable updates Vulnerability Priority Rating scoring method to flag fewer vulnerabilities as critical

      July 24, 2025

      Google adds updated workspace templates in Firebase Studio that leverage new Agent mode

      July 24, 2025

      AI and its impact on the developer experience, or ‘where is the joy?’

      July 23, 2025

      Google launches OSS Rebuild tool to improve trust in open source packages

      July 23, 2025

      EcoFlow’s new portable battery stations are lighter and more powerful (DC plug included)

      July 24, 2025

      7 ways Linux can save you money

      July 24, 2025

      My favorite Kindle tablet just got a kids model, and it makes so much sense

      July 24, 2025

      You can turn your Google Photos into video clips now – here’s how

      July 24, 2025
    • Development
      1. Algorithms & Data Structures
      2. Artificial Intelligence
      3. Back-End Development
      4. Databases
      5. Front-End Development
      6. Libraries & Frameworks
      7. Machine Learning
      8. Security
      9. Software Engineering
      10. Tools & IDEs
      11. Web Design
      12. Web Development
      13. Web Security
      14. Programming Languages
        • PHP
        • JavaScript
      Featured

      Blade Service Injection: Direct Service Access in Laravel Templates

      July 24, 2025
      Recent

      Blade Service Injection: Direct Service Access in Laravel Templates

      July 24, 2025

      This Week in Laravel: NativePHP Mobile and AI Guidelines from Spatie

      July 24, 2025

      Retrieve the Currently Executing Closure in PHP 8.5

      July 24, 2025
    • Operating Systems
      1. Windows
      2. Linux
      3. macOS
      Featured

      FOSS Weekly #25.30: AUR Poisoned, Linux Rising, PPA Explained, New Open Source Grammar Checker and More

      July 24, 2025
      Recent

      FOSS Weekly #25.30: AUR Poisoned, Linux Rising, PPA Explained, New Open Source Grammar Checker and More

      July 24, 2025

      How to Open Control Panel in Windows 11

      July 24, 2025

      How to Shut Down Windows 11

      July 24, 2025
    • Learning Resources
      • Books
      • Cheatsheets
      • Tutorials & Guides
    Home»Development»How General-Purpose AI (GPAI) Models Are Regulated

    How General-Purpose AI (GPAI) Models Are Regulated

    April 24, 2025
    1. What are General-Purpose AI (GPAI) Models?
    2. Identifying the GPAI Model with Systemic Risk
    3. Regulatory Requirements for GPAI Models Posing Systemic Risk
    4. How Does Tx Ensure Your AI Models’ Compliance with AI Regulations?

    The current state of AI advancements can be termed an unprecedented moment, as it rapidly transforms industries, with general-purpose AI (GPAI) models like Gemini and ChatGPT leading the change. However, the advancement also brings significant challenges. As per statistics, around 80-85% of AI projects fail to meet goals due to poor data quality, insufficient risk controls, and many other issues. After understanding these challenges, regulatory bodies are coming forward to ensure responsible and ethical AI development and deployment. The EU Artificial Intelligence Act (AI Act) introduces various guidelines for GPAI models, especially posing systemic risks.

    This blog will discuss the criteria for identifying GPAI models having systemic risks, EU AI regulatory requirements for such models, and how AI models can comply with evolving regulations.

    What are General-Purpose AI (GPAI) Models?

    What are General-Purpose AI (GPAI) Models?

    The EU AI Act defines GPAI models as AI models trained using large data volumes and self-supervision at scale. These models can perform multiple tasks across domains without depending on a particular functionality. General-purpose AI models can adapt and be fine-tuned to support different applications, including content creation, coding, translation, data analytics, and decision-making. Generally, there are three types of GPAI models:

    Foundational Models

    For example, GPT-4, Gemini, and Llama, trained on large-scale data and fine-tuned for downstream tasks.

    Multimodal Models

    For example, models that can generate text, image, audio, etc., by handling and integrating multiple types of input.

    Instruction-Tuned Models

    Fine-tuned to follow human instructions across general tasks.

    But, under the EU AI Act, more critical GPAI Model types have emerged:

    GPAI Models Without Systemic Risk

    Although these models are broadly capable of performing diverse tasks like code assistance, content creation, and translation, they do not fulfill the criteria of systemic impact. They still need to follow transparency and usage guidelines defined by regulatory bodies but are not subject to the highly critical obligations under the law.

    GPAI Models with Systemic Risk

    These powerful models significantly impact the economy, business operations, and society. If they are left unchecked or unregulated, the aftereffects would be severe. Systemic risks may emerge from:

    Model scale and capabilities

    Trained on extremely large datasets, the models gain advanced autonomy and can generate compelling content, making them harder to predict or control. For instance, GPT-4 or Gemini can write software code, simulate human conversations, or generate legal documents.

    Deployment Reach

    Recently, organizations have been integrating GPAI models with their critical systems to multiply their impact. For instance, a general-purpose model merged with a cloud platform, a financial service tool, or a government chatbot will significantly impact millions of users if it generates biased or harmful outputs.

    Risk of Misuse

    The more advanced the model, the higher the risk of being used unethically and maliciously. Malicious actors can use it to create misinformation, deepfakes, or exploit security gaps within an infrastructure. For instance, bad actors might use an AI model to make phishing emails, spread misinformation during a nationwide event, or manipulate stock markets.

    GPAI Models Without Systemic Risk

    Although these models are broadly capable of performing diverse tasks like code assistance, content creation, and translation, they do not fulfill the criteria of systemic impact. They still need to follow transparency and usage guidelines defined by regulatory bodies but are not subject to the highly critical obligations under the law.

    GPAI Models with Systemic Risk

    These powerful models significantly impact the economy, business operations, and society. If they are left unchecked or unregulated, the aftereffects would be severe. Systemic risks may emerge from:

    Model scale and capabilities

    Trained on extremely large datasets, the models gain advanced autonomy and can generate compelling content, making them harder to predict or control. For instance, GPT-4 or Gemini can write software code, simulate human conversations, or generate legal documents.

    Deployment Reach

    Recently, organizations have been integrating GPAI models with their critical systems to multiply their impact. For instance, a general-purpose model merged with a cloud platform, a financial service tool, or a government chatbot will significantly impact millions of users if it generates biased or harmful outputs.

    Risk of Misuse

    The more advanced the model, the higher the risk of being used unethically and maliciously. Malicious actors can use it to create misinformation, deepfakes, or exploit security gaps within an infrastructure. For instance, bad actors might use an AI model to make phishing emails, spread misinformation during a nationwide event, or manipulate stock markets.

    Identifying the GPAI Model with Systemic Risk

    General-purpose AI Models containing systemic risk will greatly impact everyone’s lives and business operations. The impact would be so significant that any negative AI incident could disturb the whole technology value chain, disrupt business operations, and impact end-users who rely on it. That’s how GPAI models are categorized into the systemic risk category. Now, the question is, “How do you calculate the impact capability?”

    The high impact of any AI model is identified when its cumulative computing power utilized during its training is greater than 10²⁵ Floating Point Operations (FLOPS). FLOPS/second is a unit that calculates a computer’s processing speed. A higher FLOPS value means higher power consumption, which also increases the risk factor. Also, FLOPS is not the only performance indicator in the EU Commission’s AI office. They will set additional benchmarks to identify and assess systemic risk.

    The AI Office will now supervise and implement the laws listed in the AI Act for GPAI Model providers (Article 88 AI Act). The AI Office will work alongside national authorities in EU countries to help them check if AI systems meet the required standards. It can ask companies to provide information about their models, carry out evaluations, and, if needed, demand changes or even remove a model from the market entirely. If GPAI model providers don’t comply, the Office can issue fines of up to 3% of a company’s worldwide annual revenue or 15 million euros, whichever is higher.

    Regulatory Requirements for GPAI Models Posing Systemic Risk

    Regulatory Requirements for GPAI Models Posing Systemic Risk

    Providers of GPAI models identified as posing systemic risk must adhere to the following obligations:

    Risk Assessment and Mitigation

    Conduct thorough evaluations of their models using standardized protocols and tools that reflect the current state of the art. This includes adversarial testing to identify and mitigate systemic risks.

    Incident Reporting

    Monitor serious incidents and report relevant information, including possible corrective measures, to the AI Office and, where necessary, to national competent authorities immediately.

    Cybersecurity Measures

    Ensure adequate cybersecurity for the model and its physical infrastructure to prevent unauthorized access and other security threats.

    Compliance Demonstration

    Until conformed standards are published, providers can use codes of practice to demonstrate compliance with the AI Act’s requirements.

    Transparency and Documentation

    Maintain detailed records of the model’s development and testing processes. Provide necessary information to downstream providers integrating the model into their AI systems while safeguarding intellectual property rights.

    How Does Tx Ensure Your AI Models’ Compliance with AI Regulations?

    With the increasing adoption of AI technologies, a critical gap also emerges ensuring AI models’ reliability, authenticity, ethics, and responsibility. At Tx, we understand these challenges and lead the way in AI quality engineering by deep diving into your general-purpose AI models’ functionalities. Our years of experience in ensuring compliance with region-specific regulatory requirements enable our clients to build GPAI AI models that are robust, secure, trustworthy, and scalable.

    General-purpose AI models, like GPT-4 and Gemini, are increasingly powerful and widely used, but they also pose risks when left unchecked. The EU AI Act sets strict rules for models with systemic impact—those using massive computing power or influencing critical sectors. These rules include mandatory risk assessments, incident reporting, and cybersecurity measures. The AI Office enforces these obligations. Tx supports organizations by helping ensure their AI models meet compliance standards through in-depth testing, governance, and responsible deployment practices. Contact our AI QE experts now to find out how Tx can assist with your AI projects.

    The post How General-Purpose AI (GPAI) Models Are Regulated first appeared on TestingXperts.

    Source: Read More

    Facebook Twitter Reddit Email Copy Link
    Previous ArticleBest Free and Open Source Alternatives to Corel Vector
    Next Article A New Citibank Report/Guide Shares How Agentic AI Will Reshape Finance with Autonomous Analysis and Intelligent Automation

    Related Posts

    Development

    Blade Service Injection: Direct Service Access in Laravel Templates

    July 24, 2025
    Development

    This Week in Laravel: NativePHP Mobile and AI Guidelines from Spatie

    July 24, 2025
    Leave A Reply Cancel Reply

    For security, use of Google's reCAPTCHA service is required which is subject to the Google Privacy Policy and Terms of Use.

    Continue Reading

    CVE-2025-4418 – AVEVA PI Connector for CygNet Privilege Escalation Vulnerability

    Common Vulnerabilities and Exposures (CVEs)

    CVE-2025-49838 – GPT-SoVITS-WebUI Deserialize Vulnerability

    Common Vulnerabilities and Exposures (CVEs)

    CISA Warns of KUNBUS Auth Bypass Vulnerabilities Exposes Systems to Remote Attacks

    Security

    Welcome to Maintainer Month: Events, exclusive discounts, and a new security challenge

    News & Updates

    Highlights

    Analyzing the Effect of Linguistic Similarity on Cross-Lingual Transfer: Tasks and Input Representations Matter

    June 3, 2025

    Cross-lingual transfer is a popular approach to increase the amount of training data for NLP…

    ‘Fake’ frames? Who cares when a game looks this good — DOOM: The Dark Ages tested with NVIDIA’s Multi Frame Gen

    May 15, 2025

    Best 5 Animation Explainer Video Production Companies in Shreveport

    June 26, 2025

    My jaw dropped at this Fallout-themed DOOM mod that reimagines the classic RPG as an FPS — one of the coolest fan games I’ve ever seen

    July 22, 2025
    © DevStackTips 2025. All rights reserved.
    • Contact
    • Privacy Policy

    Type above and press Enter to search. Press Esc to cancel.