GitHub Copilot has long leveraged different large language models (LLMs) for different use cases. The first public version of Copilot was launched using Codex, a version of OpenAI’s GPT-3 fine-tuned specifically for coding tasks. Copilot Chat was launched in 2023 with GPT-3.5 and later GPT-4. Since then, we have updated the base model versions multiple times, ranging from GPT-3.5 Turbo to GPT-4o and GPT-4o mini, to meet different latency and quality requirements.
In the past year, we experienced a boom in high-quality small and large language models that individually excel at different programming tasks. It is clear the next phase of AI code generation will be defined not only by multi-model functionality, but by multi-model choice. GitHub is committed to its ethos as an open developer platform, and to ensuring every developer has the agency to build with the models that work best for them. Today at GitHub Universe, we delivered just that.
We are bringing developer choice to GitHub Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These new models will roll out first in Copilot Chat: OpenAI o1-preview and o1-mini are available now, Claude 3.5 Sonnet will roll out progressively over the next week, and Google’s Gemini 1.5 Pro will arrive in the coming weeks. We will soon bring multi-model choice to many more of GitHub Copilot’s surface areas and functions, from Copilot Workspace to multi-file editing, code review, security autofix, and the CLI.
Whether it’s in VS Code or on GitHub.com, individual developers can now decide which models work best for them, while organizations and enterprises have full control over which models they enable for their team. Try multi-model Copilot today.
Anthropic’s Claude 3.5 Sonnet
Anthropic’s new Claude 3.5 Sonnet excels at coding tasks across the entire software development lifecycle—from initial design to bug fixes, maintenance to optimizations. Claude 3.5 Sonnet demonstrates high proficiency with complex and multi-step coding tasks, handling everything from legacy app updates to code refactoring and feature development.
Google’s Gemini 1.5 Pro
The latest Gemini models from Google demonstrate strong capabilities in coding scenarios. Gemini 1.5 Pro features a two-million-token context window and is natively multi-modal—with the ability to process code, images, audio, video, and text simultaneously. Gemini 1.5 Pro also delivers impressive response times for everyday tasks such as code suggestions, documentation, and explaining code.
OpenAI’s o1-preview and o1-mini
OpenAI o1-preview and o1-mini are part of a new series of AI models with more advanced reasoning capabilities than GPT-4o. In our exploration of o1-preview with GitHub Copilot, we found that the model’s reasoning capabilities allow for a deeper understanding of code constraints and edge cases, producing efficient, high-quality results.
With GitHub Copilot, the developer is in control. Now you can also choose which foundation LLM you use, all with a single login and a single subscription. Try multi-model Copilot today.
First glimpse: multi-model choice for GitHub Spark
In pursuit of GitHub’s vision to reach 1 billion developers, today at Universe we introduced GitHub Spark: the AI-native tool to build applications entirely in natural language. Sparks are fully functional micro apps that can integrate AI features and external data sources without requiring any management of cloud resources. Through a creativity feedback loop, users start with an initial prompt, see live previews of the app as it’s built, explore options for each request, and automatically save a version of each iteration so they can compare versions as they go.
Here’s a first glimpse, or spark 😀, of GitHub Spark.