
    A Comprehensive Coding Guide to Crafting Advanced Round-Robin Multi-Agent Workflows with Microsoft AutoGen

    May 24, 2025

    In this tutorial, we demonstrate how Microsoft’s AutoGen framework empowers developers to orchestrate complex, multi-agent workflows with minimal code. By leveraging AutoGen’s RoundRobinGroupChat and TeamTool abstractions, you can seamlessly assemble specialist assistants, such as a Researcher, FactChecker, Critic, Summarizer, and Editor, into a cohesive “DeepDive” tool. AutoGen handles the intricacies of turn-taking, termination conditions, and streaming output, allowing you to focus on defining each agent’s expertise and system prompts rather than plumbing together callbacks or manual prompt chains. Whether conducting in-depth research, validating facts, refining prose, or integrating third-party tools, AutoGen provides a unified API that scales from simple two-agent pipelines to elaborate five-agent collaborations.

    !pip install -q autogen-agentchat[gemini] autogen-ext[openai] nest_asyncio

    We install the AutoGen AgentChat package with Gemini support, the OpenAI extension for API compatibility, and the nest_asyncio library to patch the notebook’s event loop, ensuring you have all the components needed to run asynchronous, multi-agent workflows in Colab.
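
    As an optional aside (not part of the original notebook), you can confirm which releases pip actually resolved before going any further; this quick check uses only the Python standard library:

    from importlib.metadata import version


    # Print the installed versions of the packages pulled in above; exact numbers will vary.
    for pkg in ("autogen-agentchat", "autogen-ext", "nest-asyncio"):
        print(pkg, version(pkg))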

    import os, nest_asyncio
    from getpass import getpass
    
    
    nest_asyncio.apply()
    os.environ["GEMINI_API_KEY"] = getpass("Enter your Gemini API key: ")

    We import and apply nest_asyncio to enable nested event loops in notebook environments, then securely prompt for your Gemini API key using getpass and store it in os.environ for authenticated model client access.

    from autogen_ext.models.openai import OpenAIChatCompletionClient
    
    
    model_client = OpenAIChatCompletionClient(
        model="gemini-1.5-flash-8b",    
        api_key=os.environ["GEMINI_API_KEY"],
        api_type="google",
    )
    

    We initialize an OpenAI-compatible chat client pointed at Google’s Gemini by specifying the gemini-1.5-flash-8b model, injecting your stored Gemini API key, and setting api_type="google", giving you a ready-to-use model_client for downstream AutoGen agents.
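
    As an optional sanity check (an aside, not part of the original walkthrough), you can push a single message through the client before wiring up any agents; this sketch assumes the UserMessage type from autogen_core that AgentChat builds on:

    from autogen_core.models import UserMessage


    async def smoke_test():
        # One-off completion to confirm the Gemini-backed client is reachable.
        reply = await model_client.create([UserMessage(content="Say hello in five words.", source="user")])
        print(reply.content)

    # In a notebook cell you can simply run: await smoke_test()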

    from autogen_agentchat.agents import AssistantAgent
    
    
    researcher   = AssistantAgent(name="Researcher", system_message="Gather and summarize factual info.", model_client=model_client)
    factchecker  = AssistantAgent(name="FactChecker", system_message="Verify facts and cite sources.",       model_client=model_client)
    critic       = AssistantAgent(name="Critic",    system_message="Critique clarity and logic.",         model_client=model_client)
    summarizer   = AssistantAgent(name="Summarizer",system_message="Condense into a brief executive summary.", model_client=model_client)
    editor       = AssistantAgent(name="Editor",    system_message="Polish language and signal APPROVED when done.", model_client=model_client)

    We define five specialized assistant agents, Researcher, FactChecker, Critic, Summarizer, and Editor, each initialized with a role-specific system message and the shared Gemini-powered model client, enabling them, respectively, to gather information, verify accuracy, critique content, condense summaries, and polish language within the AutoGen workflow.
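
    If you want to exercise one agent in isolation before composing the team (an optional aside, not shown in the original notebook), AssistantAgent exposes the same run coroutine the Host agent uses later:

    # Optional: run a single agent to confirm its role prompt behaves as expected.
    async def try_researcher():
        result = await researcher.run(task="List three recent milestones in multi-agent LLM research.")
        print(result.messages[-1].content)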

    from autogen_agentchat.teams import RoundRobinGroupChat
    from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination
    
    
    max_msgs = MaxMessageTermination(max_messages=20)
    text_term = TextMentionTermination(text="APPROVED", sources=["Editor"])
    termination = max_msgs | text_term                                    
    team = RoundRobinGroupChat(
        participants=[researcher, factchecker, critic, summarizer, editor],
        termination_condition=termination
    )
    

    We import the RoundRobinGroupChat class along with two termination conditions, then compose a stop rule that fires after 20 total messages or when the Editor agent mentions “APPROVED.” Finally, we instantiate a round-robin team of the five specialized agents with that combined termination logic, enabling them to cycle through research, fact-checking, critique, summarization, and editing until one of the stop conditions is met.
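
    The introduction mentions streaming output; while the original notebook drives the team through a Host agent below, you could also stream the round-robin conversation directly. A minimal sketch, assuming the Console helper from autogen_agentchat.ui:

    from autogen_agentchat.ui import Console


    async def stream_team(topic: str):
        # Prints each agent's turn as it arrives instead of waiting for the final result.
        await Console(team.run_stream(task=f"Deep dive on: {topic}"))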

    from autogen_agentchat.tools import TeamTool
    
    
    deepdive_tool = TeamTool(team=team, name="DeepDive", description="Collaborative multi-agent deep dive")

    We wrap our RoundRobinGroupChat team in a TeamTool named “DeepDive” with a human-readable description, effectively packaging the entire multi-agent workflow into a single callable tool that other agents can invoke seamlessly.

    host = AssistantAgent(
        name="Host",
        model_client=model_client,
        tools=[deepdive_tool],
        system_message="You have access to a DeepDive tool for in-depth research."
    )
    

    We create a “Host” assistant agent configured with the shared Gemini-powered model_client, grant it the DeepDive team tool for orchestrating in-depth research, and prime it with a system message that informs it of its ability to invoke the multi-agent DeepDive workflow.

    import asyncio
    
    
    async def run_deepdive(topic: str):
        result = await host.run(task=f"Deep dive on: {topic}")
        print("🔍 DeepDive result:n", result)
        await model_client.close()
    
    
    topic = "Impacts of Model Context Protocl on Agentic AI"
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run_deepdive(topic))
    

    Finally, we define an asynchronous run_deepdive function that tells the Host agent to execute the DeepDive team tool on a given topic, prints the comprehensive result, and then closes the model client; we then grab Colab’s existing asyncio loop (already patched by nest_asyncio) and run the coroutine to completion, so the cell blocks until the deep dive finishes.
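
    Outside a notebook, where no event loop is already running and nest_asyncio is unnecessary, the same coroutine can simply be driven with asyncio.run; a small sketch for a plain Python script:

    if __name__ == "__main__":
        # In a regular script there is no pre-existing loop to patch, so asyncio.run suffices.
        asyncio.run(run_deepdive("Impacts of Model Context Protocol on Agentic AI"))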

    In conclusion, integrating Google Gemini via AutoGen’s OpenAI‐compatible client and wrapping our multi‐agent team as a callable TeamTool gives us a powerful template for building highly modular and reusable workflows. AutoGen abstracts away event loop management (with nest_asyncio), streaming responses, and termination logic, enabling us to iterate quickly on agent roles and overall orchestration. This advanced pattern streamlines the development of collaborative AI systems and lays the foundation for extending into retrieval pipelines, dynamic selectors, or conditional execution strategies.
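
    For instance, the “dynamic selectors” mentioned above map onto AutoGen’s SelectorGroupChat, which lets a model choose the next speaker instead of cycling in a fixed order; swapping it in is roughly a drop-in change (a sketch, assuming the same participants and termination condition defined earlier):

    from autogen_agentchat.teams import SelectorGroupChat


    # A model-driven team: a selector model picks which specialist speaks next on each turn.
    selector_team = SelectorGroupChat(
        participants=[researcher, factchecker, critic, summarizer, editor],
        model_client=model_client,
        termination_condition=termination,
    )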


    Check out the Notebook here. All credit for this research goes to the researchers of this project.

    The post A Comprehensive Coding Guide to Crafting Advanced Round-Robin Multi-Agent Workflows with Microsoft AutoGen appeared first on MarkTechPost.

