    Solving the inference problem for open source AI projects with GitHub Models

    July 23, 2025

    AI features can make an open source project shine, at least until setup asks for a paid inference API key. Requiring contributors or even casual users to bring their own large language model (LLM) key stops adoption in its tracks:

    $ my-cool-ai-tool
    Error: OPENAI_API_KEY not found

    Developers may not want to buy a paid plan just to try out your tool, and self-hosting a model can be too heavy for laptops or GitHub Actions runners.

    GitHub Models removes that friction with a free, OpenAI-compatible inference API that every GitHub account can use, with no new keys, consoles, or SDKs required. In this article, we’ll show you how to drop it into your project, run it in CI/CD, and scale when your community takes off.

    Let’s jump in.

    The hidden cost of “just add AI”

    AI features feel ubiquitous today, but getting them running locally is still a challenge for a few reasons:

    • Paid APIs: The simplest path is to ask users for an OpenAI or Anthropic key, but that’s a non-starter for many hobbyists and students who won’t pay just to try a tool.
    • Local models: Running a 2B-parameter LLM can work for lightweight tasks, but anything that requires more intelligence will quickly blow past typical laptop memory — let alone the 14 GB container that backs a GitHub Actions runner.
    • Docker images and weights: You can bundle a model with your app, but distributing multi-gigabyte weights balloons install size and slows CI.

    Every additional requirement filters out potential users and contributors. What you need is an inference endpoint that’s:

    1. Free for public projects
    2. Compatible with existing OpenAI SDKs
    3. Available wherever your code runs, like your laptop, server, or Actions runner

    That’s what GitHub Models provides.

    GitHub Models in a nutshell

    • What it is: A REST endpoint that speaks the chat/completions spec you already know.
    • What you get: A curated set of models (GPT-4o, DeepSeek-R1, Llama 3, and more) hosted by GitHub.
    • Who can call it: Anyone with a GitHub Personal Access Token (PAT), or a repository’s built-in GITHUB_TOKEN when you opt in via permissions.
    • How much it costs: Free tier for all personal accounts and OSS orgs; metered paid tier unlocks higher throughput and larger context windows.

    Because the API mirrors OpenAI’s, any client that accepts a baseURL will work without code changes. This includes OpenAI-JS, OpenAI Python, LangChain, llama.cpp, or your own curl script.
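
    In fact, because the endpoint speaks the standard chat/completions spec, a raw HTTP call works without any SDK at all. Here’s a minimal sketch (it assumes a GITHUB_TOKEN environment variable holding a PAT with the models:read scope):

    // Raw chat/completions call to GitHub Models (no SDK required).
    // Assumes GITHUB_TOKEN holds a PAT with the models:read scope.
    const res = await fetch("https://models.github.ai/inference/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${process.env.GITHUB_TOKEN}`,
      },
      body: JSON.stringify({
        model: "openai/gpt-4o",
        messages: [{ role: "user", content: "Hi!" }],
      }),
    });
    const data = await res.json();
    console.log(data.choices[0].message.content);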

    How to get started with GitHub Models

    Since GitHub Models is compatible with the OpenAI chat/completions API, almost every inference SDK can use it. To get started, you can use the OpenAI SDK:

    import OpenAI from "openai";
    
    const openai = new OpenAI({
      baseURL: "https://models.github.ai/inference", // the SDK appends /chat/completions
      apiKey: process.env.GITHUB_TOKEN  // or any PAT with models:read
    });
    
    const res = await openai.chat.completions.create({
      model: "openai/gpt-4o",
      messages: [{ role: "user", content: "Hi!" }]
    });
    console.log(res.choices[0].message.content);

    If you build your open source AI software with GitHub Models as the inference provider, every GitHub user can get up and running with it just by supplying a GitHub Personal Access Token (PAT).
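
    A small quality-of-life touch: if no token is present, fail with a pointer to PAT creation rather than a bare error like the one at the top of this post. A minimal sketch, assuming your tool reads the token from the environment (GH_MODELS_TOKEN is a hypothetical fallback name; pick whatever fits your tool):

    // Resolve a token: the Actions-provided GITHUB_TOKEN, or a user-supplied PAT.
    const token = process.env.GITHUB_TOKEN ?? process.env.GH_MODELS_TOKEN;
    if (!token) {
      console.error(
        "No GitHub token found. Create a PAT with the models:read scope at",
        "https://github.com/settings/tokens and export it as GITHUB_TOKEN."
      );
      process.exit(1);
    }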

    And if your software runs in GitHub Actions, your users won’t even need to supply a PAT. If your workflow file requests the models: read permission, the built-in GitHub token is authorized to make inference requests to GitHub Models. This means you can build a whole array of AI-powered Actions that can be shared and installed with a single click. For instance:

    • Code review or PR triage bots
    • Smart issue tagging workflows
    • Weekly repository activity report generators
    • And anything else that a GitHub Action can do

    Making AI inference easy for your users has a second benefit: it’s just as easy for your contributors. When anyone with a GitHub account can run your code end to end, you can accept contributions from the whole range of GitHub users, not just the ones with an OpenAI key.

    Zero-configuration CI with GitHub Actions

    Publishing an Action that relies on AI used to require users to add their inference API key as a GitHub Actions secret. Now you can ship a one-click install:

    # .github/workflows/triage.yml
    permissions:
      contents: read
      issues: write
      models: read   # 👈 unlocks GitHub Models for the GITHUB_TOKEN
    
    jobs:
      triage:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Smart issue triage
            env:
              GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # expose the token to the script
            run: node scripts/triage.js

    The runner’s GITHUB_TOKEN carries the models:read scope, so your Action can call any model without extra setup (a sketch of the triage script follows the list below). This makes it well suited for:

    • Automated pull request summaries
    • Issue deduplication and tagging
    • Weekly repository digests
    • Anything else you can script in an Action
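
    The workflow above runs scripts/triage.js without showing it, so here’s a hypothetical sketch of what such a script might look like. The ISSUE_TITLE and ISSUE_BODY environment variables are assumptions you’d wire up from the issue event payload:

    // scripts/triage.js (hypothetical sketch): suggest labels for an issue.
    // In Actions, GITHUB_TOKEN must be passed to the step via env (see workflow above).
    import OpenAI from "openai";

    const openai = new OpenAI({
      baseURL: "https://models.github.ai/inference",
      apiKey: process.env.GITHUB_TOKEN,
    });

    const res = await openai.chat.completions.create({
      model: "openai/gpt-4o",
      messages: [
        {
          role: "system",
          content: "Suggest up to three labels for this GitHub issue as a comma-separated list.",
        },
        {
          role: "user",
          content: `${process.env.ISSUE_TITLE}\n\n${process.env.ISSUE_BODY}`,
        },
      ],
    });

    console.log("Suggested labels:", res.choices[0].message.content);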

    Scaling when your project takes off

    The GitHub Models inference API is free for everyone. But if you or your users need more inference than the free rate limits allow, you can turn on paid usage in your settings to get significantly larger context windows and higher requests-per-minute.

    As your community grows, so will traffic, so it’s worth weighing the following:

    • Requests per minute (RPM): The free tier has default limits; the paid tier raises them several times over.
    • Context window: Free tier tops out at standard model limits; paid enables 128k tokens on supported models.
    • Latency: The paid tier runs in its own separate deployment, so you’re not in the same queue as free tier users.

    To get started, enable paid usage in Settings > Models for your org or enterprise. Your existing clients and tokens keep working, just faster and with bigger context windows.

    Take this with you

    LLMs are transforming how developers build and ship software, but requiring users to supply their own paid API key can be a barrier to entry. The magic only happens when the first npm install, cargo run, or go test just works.

    If you maintain an AI-powered open source codebase, you should consider adding GitHub Models as a default inference provider. Your users already have free AI inference via GitHub, so there’s little downside to letting them use it with your code. That’s doubly true if your project is able to run in GitHub Actions. The best API key is no API key!

    By making high-quality inference a free default for every developer on GitHub, GitHub Models gets rid of the biggest blocker to OSS AI adoption. And that opens the door to more contributions, faster onboarding, and happier users.

    Want to give it a try? Check out the GitHub Models documentation or jump straight into the API reference and start shipping AI features that just work today.

    The post Solving the inference problem for open source AI projects with GitHub Models appeared first on The GitHub Blog.
