
    Under the hood: Exploring the AI models powering GitHub Copilot

    August 29, 2025

    Since its initial launch in 2021, GitHub Copilot has evolved a lot — and so have the AI models that power it. 

    When we first announced GitHub Copilot as a technical preview, OpenAI hadn’t yet launched ChatGPT. Today, AI dominates headlines and workflows alike.

    Amid that rapid change, our focus has remained the same: helping developers stay in flow and get more done. That has meant continually re-evaluating which models power Copilot and building agentic workflows into its core experience.

    In this article, we’ll look at the models that drive different parts of GitHub Copilot and the powerful infrastructure that supports Copilot’s agentic capabilities. We’ll also discuss how model selection works across various features, like agent mode, code completions, and chat. 

    Now, let’s take a look under the hood. ✨

    Want to keep track of the latest models GitHub Copilot supports?

    We’re constantly bringing new AI models to GitHub Copilot, and if you’ve been paying attention, you know things change fast. This post offers a point-in-time snapshot of the models we offer today, so bookmark our documentation to stay one step ahead.

    Read the Docs >

    From Codex to multi-model: The evolution of GitHub Copilot

    When GitHub Copilot launched in 2021, it was powered by a single model: Codex, a descendant of GPT-3. 

    At the time, Codex was a revelation. Capable of understanding and generating code in the IDE with surprising fluency, it helped prove that AI could be a valuable tool for developers and pointed toward a future where AI could become a true coding companion.

    Since then, Copilot has transitioned away from Codex and now defaults to the latest frontier models, while also giving developers access to their choice of advanced models.

    Where it once lived firmly in the IDE as an extension for autocomplete and code generation, Copilot has evolved into part of the GitHub platform, available across developer workflows.

    Copilot can answer questions, generate tests, debug code, pick up an assigned issue, generate a pull request, assist with code reviews, analyze codebases, and even fix security vulnerabilities, among other things.

    Throughout all of these changes, we have focused on helping developers accomplish more, do less boilerplate work, stay in the flow, focus on the big picture, and ship higher-quality code faster.

    Why offer multiple models?

    Moving Copilot to a multi-model architecture wasn’t just about keeping up with AI advancements. It was about allowing developers to choose their preferred LLM for the task at hand, giving them flexibility in a rapidly changing environment.

    Different models excel at different tasks, and by integrating a variety of them, GitHub Copilot can now deliver more tailored, powerful experiences through features like these:

    • Baseline intelligence: GitHub Copilot now defaults to GPT-4.1 across chat, agent mode, and code completions. Optimized for speed, reasoning, and context handling, GPT-4.1 is tuned for developer workflows and supports more than 30 programming languages.
    • Frontier model access: In Pro+, Business, and Enterprise tiers, developers can choose from a wide range of advanced models via the model picker, including:
      • Anthropic: Claude Sonnet 3.5, Claude Sonnet 3.7, Claude Sonnet 3.7 Thinking, Claude Sonnet 4, Claude Opus 4 (preview), Claude Opus 4.1 (preview)
      • OpenAI: GPT-4.1, GPT-5 (preview), GPT-5 mini (preview), o3 (preview), o3-mini, o4-mini (preview)
      • Google: Gemini 2.0 Flash, Gemini 2.5 Pro

    Each option offers different trade-offs between speed, reasoning depth, and multimodal capabilities.

    Why developer choice matters in agentic workflows

    Because Copilot supports multiple models, developers have the autonomy to choose exactly how they build, whether they’re prioritizing speed, precision, or creativity. This flexibility lets developers tailor their experience based on their preferences — and these developer experience (DevEx) improvements translate into real productivity gains. 

    Copilot’s agentic capabilities mean that:

    • Developers no longer need to switch editors or even leave GitHub. Copilot is GitHub native, so it operates directly inside your IDE and in GitHub, which makes it easy to delegate tasks without breaking your flow. 
    • Developers can work exactly how they prefer, whether that means automating tasks with Copilot, accepting suggested fixes, or stepping back and letting the coding agent take over.
    • Copilot can operate with full context of your repositories, analyze and index your codebases, respect branch protections, and fit seamlessly into your existing review cycles.
    • Copilot handles the busywork — from triaging comments to patching vulnerabilities or chasing down cross-repo blockers — so developers can stay focused on what matters most.

    Agentic workflows help reduce complexity and prioritize developer choice at every step, leading to higher-quality code and fewer to-dos. This empowers developers to work the way they want: faster, safer, and with more confidence.

    Delivering real-world impact through better DevEx

    GPT-4.1 was recently integrated into Copilot Chat and code completions, offering improved context windows and faster response times. These improvements translate directly into less waiting, more accurate assistance, and a smoother workflow.

    As AI continues to evolve, its role in shaping the developer experience will only grow. From reducing context switching to automating repetitive tasks, AI tools like Copilot are increasingly becoming a “second brain” for developers.

    Having a choice of models lets developers customize exactly how they work. This lets them build with confidence, drive even more impact, and find greater satisfaction in their work.

    How model selection works in Copilot

    GitHub Copilot is more than a single AI model. It’s a dynamic platform that matches the right model to the right task. This flexibility is central to delivering a seamless DevEx, and it’s guided by a deep understanding of how developers work, what they need, and when they need it.

    Matching models to features

    Each Copilot feature is powered by a model specifically chosen to meet its unique demands:

    • Code completions
      • Copilot code completions now default to GPT-4.1, optimized for speed, accuracy, and relevance across more than 30 programming languages.
    • Agent mode
      • Designed for more complex, multi-step tasks, agent mode taps into models with advanced reasoning and planning capabilities.
      • Agent mode defaults to GPT-4.1, but developers can select other advanced models for specialized needs, such as Anthropic’s Claude Sonnet series, Google’s Gemini 2.0 Flash, or OpenAI’s reasoning models (o1, o3-mini), where available.
    • Copilot Chat
      • For natural language queries about code and development concepts, Copilot Chat relies on models with strong language understanding and generation abilities. It runs on GPT-4.1 by default, with the option to select other models, including Claude Sonnet 3.5, Claude Sonnet 3.7, Claude Sonnet 3.7 Thinking, Gemini 2.0 Flash, o1, and o3-mini, depending on your tier.
    • Coding agent (new)
      • When developers delegate tasks like triaging issues, generating pull requests, or patching vulnerabilities, the Copilot coding agent acts as a peer you can hand work off to (it’s also available via the Agent panel on github.com).
    • Code review (new) 
      • Code review is a new addition to the GitHub Copilot feature set and is powered by GPT-4.1, chosen for its balance of accuracy and responsiveness. For deeper reasoning across large codebases, developers can opt into models like Claude Sonnet 3.7 or Claude Sonnet 3.7 Thinking.

    Choosing the right model for the job

    Development tasks vary in complexity and context. That’s why GitHub Copilot empowers users to select the model that best suits their needs, especially in Chat and agent mode.

    Whether you’re optimizing for speed, reasoning depth, or multimodal input, there’s a model for you:

    • o4-mini (OpenAI): speed, low-latency completions
    • GPT-4.1 (OpenAI): balanced performance and multimodal support
    • GPT-5 mini (OpenAI): lightweight reasoning
    • GPT-5 (OpenAI): high-end reasoning for complex tasks
    • o3 (OpenAI): advanced planning and multi-step reasoning
    • Claude Sonnet 3.5: reliable, everyday coding tasks
    • Claude Sonnet 3.7: deeper reasoning for large codebases
    • Claude Sonnet 3.7 Thinking: long-horizon, structured problem-solving
    • Claude Sonnet 4: higher reasoning depth
    • Claude Opus 4: premium reasoning power
    • Claude Opus 4.1: most advanced Anthropic option
    • Gemini 2.0 Flash (Google): fast, multimodal capabilities
    • Gemini 2.5 Pro (Google): advanced multimodal reasoning
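    One way to read the list above is as a mapping from a stated priority to a suggested model. The sketch below is hypothetical: the priority keys are this example’s own vocabulary, not Copilot settings; only the model names come from the list.

```python
# Hypothetical helper encoding the "model for the job" list as a lookup.
# The priority keys are invented for this sketch, not Copilot settings.

BEST_FOR = {
    "speed": "o4-mini",
    "balanced": "GPT-4.1",
    "light-reasoning": "GPT-5 mini",
    "deep-reasoning": "GPT-5",
    "planning": "o3",
    "everyday-coding": "Claude Sonnet 3.5",
    "large-codebase": "Claude Sonnet 3.7",
    "long-horizon": "Claude Sonnet 3.7 Thinking",
    "multimodal": "Gemini 2.5 Pro",
}

def recommend(priority: str) -> str:
    """Map a stated priority to the model the list suggests."""
    # Unknown priorities fall back to the baseline default model.
    return BEST_FOR.get(priority, "GPT-4.1")

print(recommend("speed"))       # o4-mini
print(recommend("multimodal"))  # Gemini 2.5 Pro
```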

    Recent upgrades: GPT-4.1 across the board

    Copilot Chat, code completions, and pull request summaries now run on GPT-4.1 from OpenAI.

    This upgrade brings:

    • Faster responses: ~40% faster response generation than GPT‑4o.
    • Increased context windows: supports significantly more context than previous defaults.

    We’ve expanded premium model access for Pro+, Business, and Enterprise plans to include a broad range of powerful AI options. These include Anthropic’s Claude Sonnet 3.5, Claude Sonnet 3.7, Claude Sonnet 3.7 Thinking, Claude Sonnet 4, and Claude Opus 4 (preview); Google’s Gemini 2.0 Flash and Gemini 2.5 Pro; and OpenAI’s GPT‑4.1, GPT‑5 (preview), GPT‑5 mini (preview), o3 (preview), o3‑mini, o4‑mini (preview).

    These models are available via premium requests, in addition to unlimited access to agent mode, context-driven chat, and code completions using our base model. With the new Pro+ tier, individual developers can unlock the full potential of these advanced models.

    Take this with you

    As the world of AI keeps evolving, so will the models that power GitHub Copilot. We’re committed to continuously refining and updating our AI infrastructure to provide you with the best possible developer experience. 

    We encourage you to explore all the different models available within Copilot and discover how they can enhance your coding journey. Happy building! 

    Interested in trying GitHub Copilot? Read the Docs to learn more about Copilot features or get started today.

    The post Under the hood: Exploring the AI models powering GitHub Copilot appeared first on The GitHub Blog.
