
    Building the Internet of Agents: A Technical Dive into AI Agent Protocols and Their Role in Scalable Intelligence Systems

    May 2, 2025

    As large language model (LLM) agents gain traction across enterprise and research ecosystems, a foundational gap has emerged: communication. While agents today can autonomously reason, plan, and act, their ability to coordinate with other agents or interface with external tools remains constrained by the absence of standardized protocols. This communication bottleneck not only fragments the agent landscape but also limits scalability, interoperability, and the emergence of collaborative AI systems.

    A recent survey by researchers at Shanghai Jiao Tong University and ANP Community offers the first comprehensive taxonomy and evaluation of protocols for AI agents. The work introduces a principled classification scheme, explores existing protocol frameworks, and outlines future directions for scalable, secure, and intelligent agent ecosystems.

    The Communication Problem in Modern AI Agents

    The deployment of LLM agents has outpaced the development of mechanisms that enable them to interact with each other or with external resources. In practice, most agent interactions rely on ad hoc APIs or brittle function-calling paradigms—approaches that lack generalizability, security guarantees, and cross-vendor compatibility.

    The issue is analogous to the early days of the Internet, where the absence of common transport and application-layer protocols prevented seamless information exchange. Just as TCP/IP and HTTP catalyzed global connectivity, standard protocols for AI agents are poised to serve as the backbone of a future “Internet of Agents.”

    A Framework for Agent Protocols: Context vs. Collaboration

    The authors propose a two-dimensional classification system that delineates agent protocols along two axes:

    1. Context-Oriented vs. Inter-Agent Protocols
      • Context-Oriented Protocols govern how agents interact with external data, tools, or APIs.
      • Inter-Agent Protocols enable peer-to-peer communication, task delegation, and coordination across multiple agents.
    2. General-Purpose vs. Domain-Specific Protocols
      • General-purpose protocols are designed to operate across diverse environments and agent types.
      • Domain-specific protocols are optimized for particular applications such as human-agent dialogue, robotics, or IoT systems.

    This classification helps clarify the design trade-offs across flexibility, performance, and specialization.
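The two axes can be made concrete with a small sketch. This is illustrative only: the survey defines the taxonomy qualitatively, and the placements below are one reading of where MCP, A2A, and ANP fall, not fields defined by any spec.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtocolProfile:
    name: str
    orientation: str   # "context" (agent <-> tool/data) or "inter-agent"
    scope: str         # "general" or "domain-specific"

# Example placements of the three protocols discussed below.
PROTOCOLS = [
    ProtocolProfile("MCP", orientation="context", scope="general"),
    ProtocolProfile("A2A", orientation="inter-agent", scope="general"),
    ProtocolProfile("ANP", orientation="inter-agent", scope="general"),
]

def filter_protocols(orientation: str, scope: str) -> list[str]:
    """Return protocol names falling in one cell of the 2x2 taxonomy."""
    return [p.name for p in PROTOCOLS
            if p.orientation == orientation and p.scope == scope]

print(filter_protocols("inter-agent", "general"))  # ['A2A', 'ANP']
```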

    Key Protocols and Their Design Principles

    1. Model Context Protocol (MCP) – Anthropic

    MCP is a general-purpose context-oriented protocol that facilitates structured interaction between LLM agents and external resources. Its architecture decouples reasoning (host agents) from execution (clients and servers), enhancing security and scalability. Notably, MCP mitigates privacy risks by ensuring that sensitive user data is processed locally, rather than embedded directly into LLM-generated function calls.
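A minimal sketch of what this decoupling looks like on the wire: MCP messages are JSON-RPC 2.0, and the host's model emits only a tool name plus arguments, which the client resolves against local resources. The `read_file` tool and its argument below are hypothetical examples, not part of the protocol itself.

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 'tools/call' request in the MCP style.

    The LLM never sees the underlying resource, only the tool's
    name/argument schema, which is what keeps sensitive data local.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = build_tool_call(1, "read_file", {"path": "notes/todo.txt"})
parsed = json.loads(msg)
print(parsed["method"])  # tools/call
```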

    2. Agent-to-Agent Protocol (A2A) – Google

    Designed for secure and asynchronous collaboration, A2A enables agents to exchange tasks and artifacts in enterprise settings. It emphasizes modularity, multimodal support (e.g., files, streams), and opaque execution, preserving IP while enabling interoperability. The protocol defines standardized entities such as Agent Cards, Tasks, and Artifacts for robust workflow orchestration.
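To make the Agent Card idea concrete, here is a hedged sketch of discovery-time capability checking. Real A2A Agent Cards are JSON documents an agent publishes about itself; the field names and the `invoice-extractor` agent below are simplified illustrations, not the normative schema.

```python
# Illustrative Agent Card: advertises identity, skills, and capabilities
# so a peer can decide how to delegate work without seeing the agent's
# internal implementation (opaque execution).
AGENT_CARD = {
    "name": "invoice-extractor",
    "description": "Extracts line items from uploaded invoices",
    "capabilities": {"streaming": True},
    "skills": [{"id": "extract", "name": "Extract invoice data"}],
}

def supports_streaming(card: dict) -> bool:
    """Check a card before delegating a long-running, streamed task."""
    return bool(card.get("capabilities", {}).get("streaming", False))

print(supports_streaming(AGENT_CARD))  # True
```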

    3. Agent Network Protocol (ANP) – Open-Source

    ANP envisions a decentralized, web-scale agent network. Built atop decentralized identity (DID) and semantic meta-protocol layers, ANP facilitates trustless, encrypted communication between agents across heterogeneous domains. It introduces layered abstractions for discovery, negotiation, and task execution—positioning itself as a foundation for an open “Internet of Agents.”
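The DID layer can be illustrated with a toy identity check: each agent is addressed by a decentralized identifier and must prove control of it before encrypted exchange. Everything below is a stand-in — the `did:toy:` method, the hash-based derivation, and the key strings are ours, not ANP's actual DID methods or signature scheme.

```python
import hashlib

def toy_did(public_key: bytes) -> str:
    """Derive an illustrative DID-like identifier from a public key."""
    return "did:toy:" + hashlib.sha256(public_key).hexdigest()[:16]

def same_identity(did: str, public_key: bytes) -> bool:
    """Verify that a presented key matches the claimed identifier,
    so trust needs no central registry, only the key itself."""
    return toy_did(public_key) == did

alice = toy_did(b"alice-public-key")
print(same_identity(alice, b"alice-public-key"))    # True
print(same_identity(alice, b"mallory-public-key"))  # False
```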

    Performance Metrics: A Holistic Evaluation Framework

    To assess protocol robustness, the survey introduces a comprehensive framework based on seven evaluation criteria:

    • Efficiency – Throughput, latency, and resource utilization (e.g., token cost in LLMs)
    • Scalability – Support for increasing agents, dense communication, and dynamic task allocation
    • Security – Fine-grained authentication, access control, and context desensitization
    • Reliability – Robust message delivery, flow control, and connection persistence
    • Extensibility – Ability to evolve without breaking compatibility
    • Operability – Ease of deployment, observability, and platform-agnostic implementation
    • Interoperability – Cross-system compatibility across languages, platforms, and vendors

    This framework reflects both classical network protocol principles and agent-specific challenges such as semantic coordination and multi-turn workflows.
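The survey treats these seven criteria qualitatively; it does not prescribe numeric scores. Still, a team comparing candidate protocols might operationalize the checklist as a simple aggregate, as in this sketch (the 1-5 rating scale is our assumption):

```python
CRITERIA = ["efficiency", "scalability", "security", "reliability",
            "extensibility", "operability", "interoperability"]

def aggregate_score(ratings: dict[str, int]) -> float:
    """Average 1-5 ratings across the seven criteria.

    Unrated criteria count as 0 so gaps in an evaluation are penalized
    rather than silently ignored.
    """
    if set(ratings) - set(CRITERIA):
        raise ValueError("unknown criterion")
    return sum(ratings.get(c, 0) for c in CRITERIA) / len(CRITERIA)

score = aggregate_score({c: 4 for c in CRITERIA})
print(score)  # 4.0
```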

    Toward Emergent Collective Intelligence

    One of the most compelling arguments for protocol standardization lies in the potential for collective intelligence. By aligning communication strategies and capabilities, agents can form dynamic coalitions to solve complex tasks—akin to swarm robotics or modular cognitive systems. Protocols such as Agora take this further by enabling agents to negotiate and adapt new protocols in real time, using LLM-generated routines and structured documents.

    Similarly, protocols like LOKA embed ethical reasoning and identity management into the communication layer, ensuring that agent ecosystems can evolve responsibly, transparently, and securely.

    The Road Ahead: From Static Interfaces to Adaptive Protocols

    Looking forward, the authors outline three stages in protocol evolution:

    • Short-Term: Transition from rigid function calls to dynamic, evolvable protocols.
    • Mid-Term: Shift from rule-based APIs to agent ecosystems capable of self-organization and negotiation.
    • Long-Term: Emergence of layered infrastructures that support privacy-preserving, collaborative, and intelligent agent networks.

    These trends signal a departure from traditional software design toward a more flexible, agent-native computing paradigm.

    Conclusion

    The future of AI will not be shaped solely by model architecture or training data—it will be shaped by how agents communicate, coordinate, and learn from one another. Protocols are not merely technical specifications; they are the connective tissue of intelligent systems. By formalizing these communication layers, we unlock the possibility of a decentralized, secure, and interoperable network of agents—an architecture capable of scaling far beyond the capabilities of any single model or framework.



    The post Building the Internet of Agents: A Technical Dive into AI Agent Protocols and Their Role in Scalable Intelligence Systems appeared first on MarkTechPost.



    Highlights

    How to configure JMeter to dynamically read data from one of multiple CSV files based on load distribution?

    April 21, 2025

    I’m working on performance testing a multi-tenant application using Apache JMeter. I want to simulate load coming from three different clients, where each client’s data is stored in a separate CSV file. The load should be distributed like this:

    Client 1: 60%
    Client 2: 30%
    Client 3: 10%

    All CSV files have the same structure (columns), but contain different data per client.
    My Goal:
    I want each thread to randomly and proportionally pick data from the appropriate CSV file based on the percentages above and use it in the HTTP requests without data overlap or inconsistency.
    What I Tried:
    Approach 1: Dynamically set file path using a variable
My JMeter Test Plan structure is:
    Test Plan
    |– User Defined Variables
    |– CSV Data Set Config
    |– Stepping Thread Group
    |– |– JSR223 PreProcessor
    |– |– HTTP Request Sampler 1
    |– |– HTTP Request Sampler 2
    |– |– HTTP Request Sampler n
    |– View Result Tree
    |– Summary Report

    In the Test Plan, I have a variable path defined in User Defined Variables as:
    path = D:/jmeter/project

    I then set the Filename in CSV Data Set Config to ${csvFile}.
    Inside a JSR223 PreProcessor, I tried setting the csvFile variable like this:
def randomValue = Math.random()
if (randomValue < 0.6) {
    vars.put('csvFile', "${path}/file1.csv")
} else if (randomValue < 0.9) {
    vars.put('csvFile', "${path}/file2.csv")
} else {
    vars.put('csvFile', "${path}/file3.csv")
}

    The issue is, even though csvFile gets set correctly in the JSR223 PreProcessor, the CSV Data Set Config doesn’t pick up the value dynamically.
Approach 2: Dynamically set the file path using a variable, placing the CSV Data Set Config after the JSR223 PreProcessor
My JMeter Test Plan structure is:
    Test Plan
    |– User Defined Variables
    |– Stepping Thread Group
    |– |– JSR223 PreProcessor
    |– |– CSV Data Set Config
    |– |– HTTP Request Sampler 1
    |– |– HTTP Request Sampler 2
    |– |– HTTP Request Sampler n
    |– View Result Tree
    |– Summary Report

    Still the result is the same as in Approach 1.
    I suspect it’s due to the execution order, as JMeter processes the CSV Data Set Config before the PreProcessor runs.
    My Question:
    What is the correct way in JMeter to:

    Dynamically and proportionally distribute threads across multiple CSV files
    Ensure clean separation of data per thread (no variable conflicts)
    Avoid data overlap or race conditions between threads

Note: I cannot share actual screenshots or project files due to employer restrictions, but I’m looking for a JMeter-safe and scalable way to simulate this kind of weighted load across clients using separate CSV files, or any other suggestion for tackling this issue.
    Any ideas or recommendations for managing this effectively?
