
    HtFLlib: A Unified Benchmarking Library for Evaluating Heterogeneous Federated Learning Methods Across Modalities

    June 19, 2025

    AI institutions develop heterogeneous models for their specific tasks but face data scarcity during training. Traditional Federated Learning (FL) supports only homogeneous model collaboration, requiring identical architectures across all clients, whereas in practice clients design model architectures around their own requirements. Moreover, locally trained models embody substantial effort and intellectual property, so sharing them directly reduces participants' willingness to collaborate. Heterogeneous Federated Learning (HtFL) addresses these limitations, but the literature lacks a unified benchmark for evaluating HtFL methods across diverse domains and evaluation aspects.

    Background and Categories of HtFL Methods

    Existing FL benchmarks focus on data heterogeneity with homogeneous client models and neglect real-world scenarios that involve model heterogeneity. Representative HtFL methods fall into three main categories that address these limitations. Partial parameter sharing methods such as LG-FedAvg, FedGen, and FedGH keep heterogeneous feature extractors while assuming homogeneous classifier heads for knowledge transfer. Mutual distillation methods such as FML, FedKD, and FedMRL train and share small auxiliary models through distillation. Prototype sharing methods transfer lightweight class-wise prototypes as global knowledge, collecting local prototypes from clients and aggregating them on the server to guide local training. However, it remains unclear whether existing HtFL methods perform consistently across diverse scenarios.
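
    To make the prototype-sharing category concrete, here is a minimal sketch of the general idea (a simplified illustration with hypothetical function names, not HtFLlib's or FedProto's actual implementation): each client averages its feature embeddings per class into lightweight prototypes, and the server averages those prototypes across clients into global class prototypes that guide subsequent local training.

        import numpy as np

        def local_class_prototypes(features, labels, num_classes):
            # Client side: average feature embeddings per class to form lightweight prototypes.
            protos = {}
            for c in range(num_classes):
                mask = labels == c
                if mask.any():
                    protos[c] = features[mask].mean(axis=0)
            return protos

        def aggregate_prototypes(all_client_protos, num_classes):
            # Server side: average each class's prototypes across clients into global knowledge.
            global_protos = {}
            for c in range(num_classes):
                collected = [p[c] for p in all_client_protos if c in p]
                if collected:
                    global_protos[c] = np.mean(collected, axis=0)
            return global_protos

    Because only class-wise prototype vectors are exchanged, clients can keep arbitrary model architectures and never expose their full trained models.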

    Introducing HtFLlib: A Unified Benchmark

    Researchers from Shanghai Jiao Tong University, Beihang University, Chongqing University, Tongji University, Hong Kong Polytechnic University, and The Queen’s University of Belfast have proposed the first Heterogeneous Federated Learning Library (HtFLlib), an easy-to-use and extensible framework for integrating multiple datasets and model heterogeneity scenarios. The library integrates:

    • 12 datasets across various domains, modalities, and data heterogeneity scenarios
    • 40 model architectures, ranging from small to large, across three modalities
    • A modularized, easy-to-extend HtFL codebase with implementations of 10 representative HtFL methods
    • Systematic evaluations covering accuracy, convergence, computation costs, and communication costs

    Datasets and Modalities in HtFLlib

    HtFLlib covers detailed data heterogeneity scenarios divided into three settings: Label Skew (with Pathological and Dirichlet subsettings), Feature Shift, and Real-World. It integrates 12 datasets, including Cifar10, Cifar100, Flowers102, Tiny-ImageNet, KVASIR, COVIDx, DomainNet, Camelyon17, AG News, Shakespeare, HAR, and PAMAP2. These datasets vary significantly in domain, data volume, and number of classes, reflecting HtFLlib’s comprehensive and versatile scope. The researchers’ main focus is on image data, especially the label skew setting, since image tasks are the most widely used across fields. HtFL methods are evaluated on image, text, and sensor-signal tasks to expose their respective strengths and weaknesses.
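
    As an illustration of the Dirichlet label-skew subsetting, the following sketch shows how such a partition is commonly simulated (a general technique with a hypothetical helper name, not HtFLlib's exact partitioning code): each class's samples are split across clients according to Dirichlet-distributed proportions, and a smaller concentration parameter alpha yields more heterogeneous per-client label distributions.

        import numpy as np

        def dirichlet_label_skew(labels, num_clients, alpha=0.1, seed=0):
            # Split each class's sample indices across clients with Dirichlet-distributed proportions.
            # Smaller alpha -> more skewed (heterogeneous) per-client label distributions.
            rng = np.random.default_rng(seed)
            num_classes = int(labels.max()) + 1
            client_indices = [[] for _ in range(num_clients)]
            for c in range(num_classes):
                idx = np.flatnonzero(labels == c)
                rng.shuffle(idx)
                proportions = rng.dirichlet(alpha * np.ones(num_clients))
                cut_points = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
                for client_id, part in enumerate(np.split(idx, cut_points)):
                    client_indices[client_id].extend(part.tolist())
            return client_indices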

    Performance Analysis: Image and Text Modalities

    For image data, most HtFL methods lose accuracy as model heterogeneity increases. FedMRL stands out through its combination of auxiliary global and local models. When heterogeneous classifiers are introduced, making partial parameter sharing methods inapplicable, FedTGP maintains its lead across diverse settings thanks to its adaptive prototype refinement. Medical-dataset experiments with black-box pre-trained heterogeneous models show that HtFL improves model quality over the pre-trained models alone and yields greater improvements than auxiliary-model approaches such as FML. For text data, FedMRL’s advantage in label skew settings diminishes in real-world settings, while FedProto and FedTGP perform relatively poorly compared to their results on image tasks.
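
    For context on the auxiliary-model approaches mentioned above, mutual distillation can be sketched roughly as follows (a simplified, hypothetical training step rather than the exact FML or FedKD procedure): the heterogeneous local model and a small shared auxiliary model each learn from the labels and from the other's softened predictions via a KL-divergence term.

        import torch.nn.functional as F

        def mutual_distillation_losses(local_model, aux_model, x, y, alpha=0.5, temperature=1.0):
            # Each model combines a supervised loss with a KL term toward the other's predictions.
            local_logits = local_model(x)
            aux_logits = aux_model(x)
            log_p_local = F.log_softmax(local_logits / temperature, dim=1)
            log_p_aux = F.log_softmax(aux_logits / temperature, dim=1)
            loss_local = F.cross_entropy(local_logits, y) + alpha * F.kl_div(
                log_p_local, log_p_aux.detach(), reduction="batchmean", log_target=True)
            loss_aux = F.cross_entropy(aux_logits, y) + alpha * F.kl_div(
                log_p_aux, log_p_local.detach(), reduction="batchmean", log_target=True)
            return loss_local, loss_aux

    Only the small auxiliary model is shared with the server, which keeps each client's heterogeneous architecture and locally trained weights private.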

    Conclusion

    In conclusion, the researchers introduced HtFLlib, a framework that addresses the critical gap in HtFL benchmarking by providing unified evaluation standards across diverse domains and scenarios. Its modular design and extensible architecture make it a detailed benchmark for both research and practical applications in HtFL. Moreover, its support for heterogeneous models in collaborative learning paves the way for future research on complex pre-trained large models, black-box systems, and varied architectures across different tasks and modalities.


    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.

    The post HtFLlib: A Unified Benchmarking Library for Evaluating Heterogeneous Federated Learning Methods Across Modalities appeared first on MarkTechPost.
