Google’s Agent2Agent protocol finds new home at the Linux Foundation
At the Open Source Summit North America, Google announced that it is donating its Agent2Agent (A2A) protocol to the Linux Foundation.
The A2A protocol offers a standard way to connect agents to each other. In this way, it complements Anthropic’s Model Context Protocol (MCP), which provides a way to connect agents to different data sources and applications.
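In A2A, an agent advertises itself to other agents through a JSON “Agent Card” that describes its endpoint, capabilities, and skills. The sketch below is illustrative only: the field names follow the published A2A draft, but the agent, URL, and skill are hypothetical and the exact schema should be checked against the current spec.

```python
import json

# Illustrative A2A-style Agent Card; field names follow the public
# A2A draft spec, but treat the details as a sketch, not the schema.
agent_card = {
    "name": "invoice-summarizer",
    "description": "Summarizes invoices and answers billing questions",
    "url": "https://agents.example.com/invoice-summarizer",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {
            "id": "summarize-invoice",
            "name": "Summarize invoice",
            "description": "Produce a short summary of an uploaded invoice",
        }
    ],
}

# A2A servers typically serve the card at a well-known URL; a client
# agent fetches and parses it to discover skills before opening a task.
print(json.dumps(agent_card["skills"][0]["id"]))
```

Because discovery is just “fetch a card, pick a skill,” agents from different vendors can interoperate without prior coordination, which is the point of the protocol.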
“Drawing on Google’s internal expertise in scaling agentic systems, we designed the A2A protocol to address the challenges we identified in deploying large-scale, multi-agent systems for our customers. A2A empowers developers to build agents capable of connecting with any other agent built using the protocol and offers users the flexibility to combine agents from various providers,” Google wrote in a blog post when it first launched A2A in April.
OpenAI adds Deep Research and Webhooks to the API
The addition of Deep Research will enable developers to build research agents that find, analyze, and synthesize data.
Webhooks were also added, enabling developers to receive notifications for API events like completed responses, fine-tuning jobs, and batch jobs.
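A webhook receiver should verify a signature before trusting an event like a completed fine-tuning job. The sketch below assumes a Standard Webhooks-style HMAC scheme (sign `id.timestamp.payload` with the endpoint secret); the header names, secret format, and exact signing details should be confirmed against OpenAI’s webhook documentation, and the event payload here is made up.

```python
import base64
import hashlib
import hmac

def verify_webhook(secret: str, webhook_id: str, timestamp: str,
                   payload: bytes, signature_header: str) -> bool:
    """Check an HMAC-SHA256 signature in the Standard Webhooks style:
    sign "{id}.{timestamp}.{payload}" with the endpoint secret and
    compare against the base64 value(s) in the signature header."""
    signed_content = f"{webhook_id}.{timestamp}.".encode() + payload
    digest = hmac.new(secret.encode(), signed_content, hashlib.sha256).digest()
    expected = "v1," + base64.b64encode(digest).decode()
    # The header may carry several space-separated signatures; accept any match.
    return any(hmac.compare_digest(expected, sig)
               for sig in signature_header.split())

# Hypothetical event: a fine-tuning job finished.
secret = "whsec-demo"
payload = b'{"type": "fine_tuning.job.succeeded", "id": "evt_123"}'
digest = hmac.new(secret.encode(),
                  b"msg_1.1700000000." + payload, hashlib.sha256).digest()
header = "v1," + base64.b64encode(digest).decode()
print(verify_webhook(secret, "msg_1", "1700000000", payload, header))  # True
```

Including the timestamp in the signed content also lets the receiver reject stale deliveries, a common defense against replayed webhook requests.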
Additionally, the company is dropping the price for web search and adding it to more models. It costs $10 / 1k tool calls in o3, o3-pro, and o4-mini, and $25 / 1k tool calls in GPT-4o and GPT-4.1.
Anthropic adds ability to host and share Claude apps in its platform
Now, developers will be able to not only interact with Claude, but also use it to build, host, and share their creations, eliminating the need to worry about hosting it themselves.
Users will authenticate with their own Claude account, and their API usage will count against their subscription instead of the app developer being charged.
Qodo launches CLI agent framework
Qodo, maker of an AI coding platform, today announced the release of Qodo Gen CLI, an agent framework that enables developers to create, customize, and deploy their own AI coding agents.
With the framework, developers create agents by writing configuration files that add autonomous AI agents throughout the software development life cycle, according to the company’s announcement.
Qodo was built to help developers add autonomous coding capabilities to their applications without requiring expertise in AI systems, which can lead to solutions that align with an organization’s requirements, the company said. With Qodo Gen CLI, developers can define custom agents, specifying which tools they can access, which actions trigger them, which instructions guide their behavior, and what their outputs should be.
Warp 2.0 evolves terminal experience into an Agentic Development Environment
Warp is undergoing a significant transformation with its 2.0 launch, shifting from its origins as a terminal emulator with AI integrations into an Agentic Development Environment (ADE).
It consists of four main capabilities: Code, Agents, Terminal, and Drive. Any of those can be initiated from the main interface, which accepts both prompts and terminal commands.
“The products on the market today, from AI IDEs to CLI coding agents, all miss the mark supporting this workflow. They bolt agents onto code editors through chat panels and bury them in CLI apps. What’s needed is a product native to the agentic workflow; one primarily designed for prompting, multi-threading, agent management, and human-agent collaboration across real-world codebases and infrastructure,” Zach Lloyd, the company’s CEO and founder, wrote in a blog post.
Agent Mode for Gemini added to Android Studio
With Agent Mode, a developer can describe a complex goal, and the agent will come up with an execution plan and then complete the tasks.
Examples of tasks Agent Mode can tackle include building a project and fixing errors, extracting hardcoded strings and migrating them to strings.xml, adding support for dark mode to an app, and implementing a new screen in an app from a screenshot.
Developers will have the ability to review, accept, or reject any of the agent’s proposed changes, or ask it to iterate on their feedback. There is also an auto-approve feature that can be enabled for situations when a developer wants to iterate quickly on ideas.
Vercel Agent launches in limited beta
The Vercel Agent is an AI assistant that analyzes Vercel app performance and security data.
It can summarize anomalies, identify likely root causes, and recommend remediation actions across the entire platform, from managing firewall rules to identifying optimization opportunities.
Tricentis Agentic Test Automation
This is a new AI agent that can generate test cases automatically, leveraging text-based prompts as well as prior test runs. It also makes use of Tricentis’ Vision AI technology to interpret visual elements across platforms, and integrates with Tricentis Tosca.
Additionally, the company launched a remote MCP server and a beta for its AI workflows capability that enables better communication between agents and humans.
Gemini 2.5 Pro and Flash are generally available and Gemini 2.5 Flash-Lite in preview
According to Google, no changes have been made to Pro and Flash since the last preview, except that the pricing for Flash is different. When these models were first announced, there was separate thinking and non-thinking pricing, but Google said that separation led to confusion among developers.
The new pricing for 2.5 Flash is the same for both thinking and non-thinking modes. The prices are now $0.30/1 million input tokens for text, image, and video, $1.00/1 million input tokens for audio, and $2.50/1 million output tokens for all. This represents an increase in input cost and a decrease in output cost.
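At these unified rates, estimating a call’s cost is simple arithmetic. A quick sketch using the prices quoted above ($0.30/1M text, image, and video input tokens, $1.00/1M audio input tokens, $2.50/1M output tokens):

```python
# Per-million-token prices for Gemini 2.5 Flash, as quoted above.
PRICES_PER_MILLION = {"input_text": 0.30, "input_audio": 1.00, "output": 2.50}

def flash_call_cost(input_text_tokens=0, input_audio_tokens=0, output_tokens=0):
    """Estimated USD cost of one Gemini 2.5 Flash call."""
    return (input_text_tokens * PRICES_PER_MILLION["input_text"]
            + input_audio_tokens * PRICES_PER_MILLION["input_audio"]
            + output_tokens * PRICES_PER_MILLION["output"]) / 1_000_000

# Example: 10k text input tokens and 2k output tokens costs $0.008.
print(round(flash_call_cost(input_text_tokens=10_000, output_tokens=2_000), 6))
```

Because output tokens cost roughly eight times as much as text input tokens, long generations dominate the bill for most workloads.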
Google also introduced a preview of Gemini 2.5 Flash-Lite, which has the lowest latency and cost among the 2.5 models. The company sees this as a cost-effective upgrade from 1.5 and 2.0 Flash, with better performance across most evaluations, lower time to first token, and a higher decode speed in tokens per second.
Gemini 2.5 Flash-Lite also allows users to control the thinking budget via an API parameter. Since the model is designed for cost and speed efficiency, thinking is turned off by default.
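In practice, the budget is a cap on how many tokens the model may spend reasoning before it answers. The request-body sketch below follows the shape of the Gemini REST API’s `generationConfig.thinkingConfig` field, but the exact field names should be verified against the current API reference, and the prompt is made up.

```python
import json

# Sketch of a Gemini request body that sets a thinking budget.
# Field names follow the Gemini API's generationConfig.thinkingConfig
# shape but should be checked against the current API reference.
request_body = {
    "contents": [{"parts": [{"text": "Summarize this ticket in one line."}]}],
    "generationConfig": {
        # 0 disables thinking (Flash-Lite's default); a positive value
        # caps how many tokens the model may spend reasoning first.
        "thinkingConfig": {"thinkingBudget": 512}
    },
}
print(json.dumps(request_body["generationConfig"]))
```

Setting the budget to 0 keeps Flash-Lite in its fast, cheap default mode; raising it trades latency and cost for harder problems.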
GitHub Copilot Spaces arrive
GitHub Copilot Spaces allow developers to bundle the context Copilot should read into a reusable space, which can include things like code, docs, transcripts, or sample queries.
Once the space is created, every chat, completion, or command Copilot works from will be grounded in that knowledge, enabling it to produce “answers that feel like they came from your organization’s resident expert instead of a generic model,” GitHub explained.
Copilot Spaces will be free during its public preview and won’t count against Copilot seat entitlements when the base model is used.
OpenAI improves prompting in API
The company has now made it easier to reuse, share, save, and manage prompts in the API by making prompts an API primitive.
Prompts can be reused across the Playground, API, Evals, and Stored Completions. The Prompt object can also be referenced in the Responses API and OpenAI’s SDKs.
Additionally, the Playground now has a button that will optimize the prompt for use in the API.
“By unifying prompts across our surfaces, we hope these changes will help you refine and reuse prompts better—and more promptly,” OpenAI wrote in a post.
Syncfusion releases Code Studio
Code Studio is an AI-powered code editor that differs from other options available by having the LLM utilize Syncfusion’s library of over 1,900 pre-tested UI components rather than generating code from scratch.
It offers four different assist modes: Autocomplete, Chat, Edit, and Agent. It works with models from OpenAI, Anthropic, Google, Mistral, and Cohere, as well as self-hosted models. It also comes with governance capabilities like role-based access, audit logging, and an admin console that provides usage insights.
“Code Studio began as an in-house tool and today writes up to a third of our code,” said Daniel Jebaraj, CEO of Syncfusion. “We created a secure, model-agnostic assistant so enterprises can plug it into their stack, tap our proven UI components, and ship cleaner features in less time.”
AI Alliance splits into two new non-profits
The AI Alliance is a collaborative effort among over 180 organizations across research, academia, and industry, including Carnegie Mellon University, Hugging Face, IBM, and Meta. It has now been incorporated into a 501(c)(3) research and education lab and a 501(c)(6) AI technology and advocacy organization.
The research and education lab will focus on “managing and supporting scientific and open-source projects that enable open community experimentation and learning, leading to better, more capable, and accessible open-source and open data foundations for AI.”
The technology and advocacy organization will focus on “global engagement on open-source AI advocacy and policy, driving technology development, industry standards and best practices.”
Digital.ai introduces Quick Protect Agent
Quick Protect Agent is a mobile application security agent that follows the recommendations of OWASP MASVS, an industry standard for mobile app security. Examples of OWASP MASVS protections include obfuscation, anti-tampering, and anti-analysis.
“With Quick Protect Agent, we are expanding application protection to a broader audience, enabling organizations both large and small to add powerful protections in just a few clicks,” said Derek Holt, CEO of Digital.ai. “In today’s AI world, all apps are at risk, and by democratizing our app hardening capabilities, we are enabling the protection of more applications across a broader set of industries. With eighty-three percent of applications under constant attack – the continued innovation within our core offerings, including the launch of our new Quick Protect Agent, could not be coming at a more crucial time.”
IBM launches new integration to help unify AI security and governance
It is integrating its watsonx.governance and Guardium AI security solutions so that companies can manage both from a single tool. The integrated solution will be able to validate against 12 different compliance frameworks, including the EU AI Act and ISO 42001.
Guardium AI Security is being updated to be able to detect new AI use cases in cloud environments, code repositories, and embedded systems. Then, it can automatically trigger the appropriate governance workflows from watsonx.governance.
“AI agents are set to revolutionize enterprise productivity, but the very benefits of AI agents can also present a challenge,” said Ritika Gunnar, general manager of data and AI at IBM. “When these autonomous systems aren’t properly governed or secured, they can carry steep consequences.”
Secure Code Warrior introduces AI Security Rules
This new ruleset will provide developers with guidance for using AI coding assistants securely. It enables them to establish guardrails that discourage the AI from risky patterns, such as unsafe eval usage, insecure authentication flows, or failure to use parameterized queries.
They can be adapted to use with a variety of coding assistants, including GitHub Copilot, Cline, Roo, Cursor, Aider, and Windsurf.
The rules can be used as-is or adapted to a company’s tech stack or workflow so that AI-generated output better aligns across projects and contributors.
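The parameterized-queries item is the classic case such guardrails target: SQL built by string interpolation is injectable, while bound parameters are not. A minimal illustration with Python’s sqlite3, showing the pattern a rule like this would steer an AI assistant away from and the one it would steer toward:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Risky: interpolating input into SQL -- the injected OR clause
# matches every row in the table.
injected = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'").fetchall()

# Safe: a bound parameter treats the whole string as a literal value,
# so the injection attempt matches nothing.
parameterized = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(len(injected), len(parameterized))  # 2 0
```

Codifying patterns like this in a ruleset gives the assistant the same answer every time, instead of relying on each developer to catch the unsafe version in review.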
“These guardrails add a meaningful layer of defense, especially when developers are moving fast, multitasking, or find themselves trusting AI tools a little too much,” said Pieter Danhieux, co-founder and CEO of Secure Code Warrior. “We’ve kept our rules clear, concise and strictly focused on security practices that work across a wide range of environments, intentionally avoiding language or framework-specific guidance. Our vision is a future where security is seamlessly integrated into the developer workflow, regardless of how code is written. This is just the beginning.”
SingleStore adds new capabilities for deploying AI
The company has improved the overall data integration experience by allowing customers to use SingleStore Flow within Helios to move data from Snowflake, Postgres, SQL Server, Oracle, and MySQL to SingleStore.
It also improved the integration with Apache Iceberg by adding a speed layer on top of Iceberg to improve data exchange speeds.
Other new features include the ability for Aura Container Service to host Cloud Functions and Inference APIs, integration with GitHub, Notebooks scheduling and versioning, an updated billing forecasting UI, and easier pipeline monitoring and sequences.
Apple announces Foundation Models framework
The Foundation Models framework will enable any app developer to utilize Apple Intelligence’s on-device processing for their apps. Apple Intelligence leverages Apple’s silicon processors to be able to process data without it leaving the device, ensuring better data privacy for users.
According to Apple, Automattic’s journaling app Day One utilized the framework to add intelligence features that are privacy-centric.
“The Foundation Model framework has helped us rethink what’s possible with journaling,” said Paul Mayne, head of Day One at Automattic. “Now we can bring intelligence and privacy together in ways that deeply respect our users.”
The framework natively supports Swift, and includes capabilities such as guided generation and tool calling.
Mistral launches its first reasoning model
Magistral is a reasoning model that excels in “domain-specific, transparent, and multilingual reasoning,” according to the company.
It comes in two different sizes: Magistral Small is a 24B parameter version and Magistral Medium is a more powerful version for the enterprise. Magistral Small is also open source so that the community can build upon its architecture and reasoning processes.
New Relic adds MCP support in AI Monitoring solution
What this means is that developers who are building agents with MCP will be able to get insights into the life cycle of an MCP request, which encompasses invoked tools, call sequences, and execution durations.
With this new capability, developers will be able to correlate MCP performance with the entire application ecosystem. It will enable them to track usage patterns, latency, errors, and performance, so they can optimize their MCP services.
ArmorCode releases AI Code Insights
This latest offering utilizes the company’s agent Anya, which has contextual understanding of a customer’s code repositories.
Its key benefits include the ability to better prioritize remediation efforts, surface hidden assets, manage change risks, understand code ownership, and have proactive AI exposure management.
“While development velocity has skyrocketed, security teams are often flying blind, buried in alerts without understanding the actual risk lurking within their code repositories,” said Mark Lambert, chief product officer at ArmorCode. “AI Code Insights changes that. We’re providing the crucial context – the ‘what, who and how’ – behind the code and vulnerability. This allows organizations to finally cut through the noise, prioritize effectively, and proactively secure their most critical assets before they become liabilities. It’s about making existing security investments work smarter, not just harder.”
Amplitude launches AI agents for product development
These new agents can assist with improving checkout conversion, feature adoption, and user onboarding. They can also look for signs that a user is ready to upgrade so that they can send them special offers.
“With Amplitude’s AI Agents working around the clock, product development shifts from a slow, step-by-step process to a high-speed, multi-track system where strategy, analysis, and action can happen at the same time,” said Spenser Skates, CEO and co-founder of Amplitude. “This isn’t just about doing what you’ve always done, faster. It’s about doing what you wouldn’t, couldn’t, or didn’t know how to do before.”
OpenAI o3-pro available in the API
OpenAI has announced that o3-pro can now be accessed through the OpenAI API. Additionally, Pro and Team users can now use o3-pro in ChatGPT.
The pricing for using o3-pro in the API will be 87% cheaper than o1-pro, and the price of o3 is also being cut by 80%. “We optimized our inference stack that serves o3. Same exact model—just cheaper,” the company wrote in a post on X.
Zencoder launches end-to-end UI testing agent
Zencoder has announced a public beta for Zentester, its new end-to-end UI testing AI agent.
Zentester imitates how humans behave when interacting with web applications, such as navigating the layout, and identifying and using interactive elements. It does this by combining images (screenshots) with DOM (snapshot) information.
As it runs through test scenarios, it generates test artifacts that capture the actions performed and the expected visual and functional outcomes.
Databricks adds new tools like Lakebase, Lakeflow Designer, and Agent Bricks to better support building AI apps and agents in the enterprise
Lakebase is a managed Postgres database designed for running AI apps and agents. It adds an operational database layer to Databricks’ Data Intelligence Platform.
Coming soon as a preview, Lakeflow Designer is a no-code ETL capability for creating production data pipelines. It features a drag-and-drop UI and an AI assistant that allows users to describe what they want in natural language.
It is based on Lakeflow, the company’s solution for data engineers for building data pipelines. Lakeflow is now generally available, with new features such as Declarative Pipelines, a new IDE, new point-and-click ingestion connectors for Lakeflow Connect, and the ability to write directly to the lakehouse using Zerobus.
Agent Bricks is Databricks’ new tool for creating agents for enterprise use cases. Users can describe the task they want the agent to do, connect their enterprise data, and Agent Bricks handles the creation.
Trustwise launches trust layer for AI agents
Its new Harmony AI solution includes six shields that secure AI across different models, agents, and clouds. They include an MCP Shield, Prompt Shield (to prevent injection attacks and hallucinations), Compliance Shield, Brand Shield (for maintaining brand tone and persona), Cost Shield, and Carbon Shield (for reducing carbon footprint of AI).
“Developers aren’t just securing text anymore, they’re securing actions,” said Matthew Barker, head of AI research at Trustwise. “That demands real-time controls that help both developers and security teams monitor how agents think, decide, and act. Harmony AI acts as a runtime shield, enforcing security and control directly in the decision loop and preventing drift before agents go off course.”
OpenAI announces Codex updates
The coding agent Codex can now access the internet during task execution, opening up new capabilities such as the ability to install base dependencies, run tests that need external resources, and upgrade or install packages.
Internet access is turned off by default. It can be enabled when a new environment is created, or an environment can be edited to allow it. Users can control the domains and HTTP methods that Codex can use.
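An allowlist like this boils down to checking each outbound request’s host and HTTP method against the environment’s policy. The sketch below is a hypothetical illustration of the idea, not Codex’s actual implementation; the domains are just examples a dependency-install task might need.

```python
from urllib.parse import urlparse

# Hypothetical policy in the spirit of Codex's per-environment controls:
# only these domains and HTTP methods may be used during a task.
ALLOWED_DOMAINS = {"pypi.org", "files.pythonhosted.org"}
ALLOWED_METHODS = {"GET", "HEAD"}

def request_allowed(method: str, url: str) -> bool:
    """Allow a request only if its method and host match the policy
    (exact domain match or a subdomain of an allowed domain)."""
    host = urlparse(url).hostname or ""
    domain_ok = any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)
    return method.upper() in ALLOWED_METHODS and domain_ok

print(request_allowed("GET", "https://pypi.org/simple/requests/"))   # True
print(request_allowed("POST", "https://example.com/exfiltrate"))     # False
```

Restricting methods as well as domains matters: read-only GETs let an agent fetch packages, while blocking POSTs limits its ability to push data out.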
OpenAI also announced that Codex has begun rolling out to ChatGPT Plus users. The company did note that it might set rate limits for Plus users during high demand periods.
Mistral releases coding assistant
Mistral Code builds on the open-source project Continue, which provides a hub of models, rules, prompts, docs, and other building blocks for creating AI code assistants. It is powered by four different coding models: Codestral, Codestral Embed, Devstral, and Mistral Medium.
It is proficient in over 80 programming languages, and can reason over files, Git diffs, terminal output, and issues. It is currently available as a private beta for JetBrains IDEs and VSCode.
“Our goal with Mistral Code is simple: deliver best-in-class coding models to enterprise developers, enabling everything from instant completions to multi-step refactoring—through an integrated platform deployable in the cloud, on reserved capacity, or air-gapped on-prem GPUs. Unlike typical SaaS copilots, all parts of the stack—from models to code—are delivered by one provider subject to a single set of SLAs, and every line of code resides inside the customer’s enterprise boundary,” the company wrote in its announcement.
Postman introduces Agent Mode to integrate the power of AI agents into Postman’s core capabilities
The agents can create, organize, and update collections; create test cases; generate documentation; build multi-step agents to automate repeatable API tasks; and set up monitoring and observability.
Abhinav Asthana, CEO and co-founder of Postman, told SD Times that it’s sort of like having an expert Postman user by your side.
The company also announced the ability for users to turn any public API on the Postman network into an MCP server. It also launched a network for MCP servers where publishers can host tools for agents and have them be easily discoverable by developers. “We basically took all the remote MCP servers available today, verified them, and put them on the public network,” Asthana said.
FinOps Foundation launches FinOps for AI certification
The training and certification is designed to “help FinOps practitioners understand, manage, and optimize AI-related cloud spend,” the foundation explained.
It will address topics such as AI-specific cost allocation, chargeback models, workload optimization, unit economics, and sustainability.
The lessons will be a four-part series, the first of which is now available, with the other parts launching in September 2025, November 2025, and January 2026. The certification exam will be available in March 2026.
Latest version of Microsoft’s Dev Proxy adds LLM usage and cost tracking
Dev Proxy 0.28 includes the OpenAITelemetryPlugin to provide visibility into how applications are interacting with OpenAI. For each request, it will provide information about the model used, the token count, cost estimation, and grouped summaries per model.
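Producing grouped summaries per model is a straightforward aggregation over observed requests. The sketch below is a hypothetical illustration of that kind of telemetry rollup, not Dev Proxy’s code, and the per-million-token rates are assumptions for the example:

```python
from collections import defaultdict

# Hypothetical per-1M-token rates (input, output) -- illustrative only.
RATES = {"gpt-4o": (2.50, 10.00), "gpt-4o-mini": (0.15, 0.60)}

def summarize(requests):
    """Aggregate token counts and estimated cost per model, in the
    spirit of the grouped summaries a telemetry plugin reports."""
    totals = defaultdict(lambda: {"prompt": 0, "completion": 0, "cost": 0.0})
    for model, prompt_toks, completion_toks in requests:
        rate_in, rate_out = RATES[model]
        t = totals[model]
        t["prompt"] += prompt_toks
        t["completion"] += completion_toks
        t["cost"] += (prompt_toks * rate_in + completion_toks * rate_out) / 1e6
    return dict(totals)

# Three intercepted requests: two to gpt-4o, one to gpt-4o-mini.
summary = summarize([("gpt-4o", 1000, 500), ("gpt-4o", 2000, 100),
                     ("gpt-4o-mini", 5000, 1000)])
print(round(summary["gpt-4o"]["cost"], 6))
```

Seeing cost broken out per model like this is what makes it easy to spot, say, a debug path accidentally routing traffic to an expensive model.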
Dev Proxy can also now use the local AI runtime stack Foundry Local as its local language model provider.
Other updates in Dev Proxy 0.28 include new extensions for .NET Aspire, improved generating PATCH operations for TypeSpec, support for JSONC in mock files, and improved logging.
Snowflake introduces agentic AI innovations for data insights
Snowflake Intelligence (public preview soon) is powered by intelligent data agents and provides a natural language experience for asking questions that can result in the delivery of actionable insights from structured and unstructured data. Also in private preview soon is a new Data Science Agent to help data scientists automate routine ML model development tasks, according to the company’s announcement.
Snowflake Intelligence brings together data from numerous sources and uses the new Snowflake Openflow to compile information from spreadsheets, documents, images, and databases simultaneously. The data agents can generate visualizations and assist users in taking action on insights, Snowflake said in its announcement. Snowflake Intelligence can also access third-party knowledge through Cortex Knowledge Extensions, soon to be generally available on Snowflake Marketplace.
Progress adds new AI code assistants
Progress Software announced new AI code assistants and other capabilities built into the Q2 2025 release of Progress Telerik and Progress Kendo UI, .NET and JavaScript UI libraries for modern application development. This release introduces AI Coding Assistants for Blazor and React, AI-driven theme generation and GenAI-powered reporting insights, the company announced.
The AI Coding Assistants can automatically generate code in Telerik UI for Blazor and KendoReact libraries within many AI-powered IDEs, which reduces the time spent on manual edits and shortens development cycles. Further, developers can enter natural language prompts into Progress ThemeBuilder to create custom styles for Telerik and Kendo UI components, the company wrote in its announcement. The releases also include reporting summaries and insights powered by generative AI in Progress Telerik Reporting, as well as a GenAI-powered PDF processing library, which the company said can provide “instant document insights, AI prompt options in the Editor control and new AI building blocks and page templates to speed up UI development.”
IBM announces watsonx AI Labs
watsonx AI Labs is an innovation hub based in New York City that will connect AI developers with IBM’s resources and expertise.
According to IBM, NYC was chosen as the location because it has over 2,000 AI startups. The company hopes to support those startups as well as pursue collaborations with local universities and research institutions.
“This isn’t your typical corporate lab. watsonx AI Labs is where the best AI developers gain access to world-class engineers and resources and build new businesses and applications that will reshape AI for the enterprise,” said Ritika Gunnar, general manager of data and AI at IBM. “By anchoring this mission in New York City, we are investing in a diverse, world‑class talent pool and a vibrant community whose innovations have long shaped the tech landscape.”
The post June 2025: All AI updates from the past month appeared first on SD Times.