
    Together AI Introduces Mixture of Agents (MoA): An AI Framework that Leverages the Collective Strengths of Multiple LLMs to Improve State-of-the-Art Quality

    June 19, 2024

    In a significant step forward for AI, Together AI has introduced Together MoA, an innovative Mixture of Agents (MoA) approach that harnesses the collective strengths of multiple large language models (LLMs) to improve on state-of-the-art quality.

    MoA employs a layered architecture, with each layer comprising several LLM agents. These agents utilize outputs from the previous layer as auxiliary information to generate refined responses. This method allows MoA to integrate diverse capabilities and insights from various models, resulting in a more robust and versatile combined model. The implementation has proven successful, achieving a remarkable score of 65.1% on the AlpacaEval 2.0 benchmark, surpassing the previous leader, GPT-4o, which scored 57.5%.
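
    To make the layered flow concrete, here is a minimal Python sketch of the idea as described above, not Together's actual implementation: `query_model` is a hypothetical stand-in for any chat-completion API call, and the prompt wording is illustrative.

    ```python
    from typing import List

    def query_model(model: str, prompt: str) -> str:
        """Stand-in for a real chat-completion call; swap in your API client."""
        return f"[{model}] draft answer for: {prompt[:40]}"

    def run_layer(models: List[str], user_prompt: str, prior: List[str]) -> List[str]:
        """One MoA layer: each agent answers the user prompt, treating the
        previous layer's outputs as auxiliary reference material."""
        if prior:
            refs = "\n\n".join(f"[Reference {i + 1}]\n{r}" for i, r in enumerate(prior))
            prompt = (
                f"{user_prompt}\n\nResponses from other models:\n{refs}\n\n"
                "Use these references as auxiliary information and write an improved answer."
            )
        else:
            prompt = user_prompt  # the first layer sees only the raw prompt
        return [query_model(m, prompt) for m in models]

    def moa(user_prompt: str, layers: List[List[str]]) -> str:
        """Chain the layers; by convention the final layer holds a single aggregator."""
        responses: List[str] = []
        for layer_models in layers:
            responses = run_layer(layer_models, user_prompt, responses)
        return responses[0]
    ```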

    A critical insight driving the development of MoA is the concept of “collaborativeness” among LLMs. This phenomenon suggests that an LLM tends to generate better responses when presented with outputs from other models, even if those models are less capable. By leveraging this insight, MoA’s architecture categorizes models into “proposers” and “aggregators.” Proposers generate initial reference responses, offering nuanced and diverse perspectives, while aggregators synthesize these responses into high-quality outputs. This iterative process continues through several layers until a comprehensive and refined response is achieved.
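
    In terms of the sketch above, proposers and aggregators differ mainly in where they sit: every layer but the last acts as a proposer layer, and the final layer holds a lone aggregator. A hypothetical configuration, with placeholder model names rather than the paper's exact lineup, might look like this:

    ```python
    # Two proposer layers feed one aggregator; model names are illustrative placeholders.
    proposers = ["llama-3-70b-instruct", "mixtral-8x22b-instruct", "qwen1.5-72b-chat"]
    layers = [proposers, proposers, ["qwen1.5-110b-chat"]]  # last layer aggregates
    print(moa("Summarize the trade-offs of layered LLM ensembles.", layers))
    ```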

    The Together MoA framework has been rigorously tested on multiple benchmarks, including AlpacaEval 2.0, MT-Bench, and FLASK, and it holds top positions on the AlpacaEval 2.0 and MT-Bench leaderboards. Notably, on AlpacaEval 2.0, Together MoA improved on the previous best score by 7.6 absolute percentage points, from 57.5% (GPT-4o) to 65.1%, using only open-source models, demonstrating that an ensemble of open models can outperform leading closed-source alternatives.

    In addition to its technical success, Together MoA is designed with cost-effectiveness in mind. An analysis of cost-performance trade-offs indicates that the Together MoA configuration offers the best balance, delivering high-quality results at a reasonable cost. This is particularly evident in the Together MoA-Lite configuration, which, despite having fewer layers, matches GPT-4o in cost while achieving superior quality.

    MoA’s success is attributed to the collaborative efforts of several organizations in the open-source AI community, including Meta AI, Mistral AI, Microsoft, Alibaba Cloud, and Databricks, whose models, Meta Llama 3, Mixtral, WizardLM, Qwen, and DBRX, were instrumental in this achievement. Additionally, the benchmarks AlpacaEval (Tatsu Lab), MT-Bench (LMSYS), and FLASK (KAIST AI) played a crucial role in evaluating MoA’s performance.

    Looking ahead, Together AI plans to further optimize the MoA architecture by exploring different model choices, prompts, and configurations. A key focus is reducing time-to-first-token latency, which the layered design inflates because each layer must finish before the next can begin. The team also aims to strengthen MoA’s capabilities on reasoning-focused tasks, further solidifying its position as a leader in AI innovation.

    In conclusion, Together MoA represents a significant advancement in leveraging the collective intelligence of open-source models. Its layered approach and collaborative ethos exemplify the potential for enhancing AI systems, making them more capable, robust, and aligned with human reasoning. The AI community eagerly anticipates this groundbreaking technology’s continued evolution and application.

    Check out the Paper, GitHub, and Blog. All credit for this research goes to the researchers of this project. This article originally appeared on MarkTechPost.

