
    Google Researchers Propose a Formal Boosting Machine Learning Algorithm for Any Loss Function Whose Set of Discontinuities has Zero Lebesgue Measure

    July 8, 2024

    Boosting is a highly effective optimization setting born in machine learning (ML): it requires one to efficiently learn arbitrarily accurate models using a weak learner oracle, which provides classifiers that perform only marginally better than random guessing. Although the original boosting model did not require first-order loss information, over its decades-long history boosting has rapidly been recast as a first-order optimization setting, with some even incorrectly defining it as such. In its original form, then, boosting differs significantly from gradient-based optimization.
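    To make the weak learner oracle concrete, here is a minimal AdaBoost-style sketch in Python (a hypothetical toy, not the paper’s algorithm): each round, the oracle only has to return a decision stump that beats random guessing on the current example weights, and the booster reweights the examples to focus on its mistakes.

    ```python
    import numpy as np

    def weak_learner(X, y, w):
        """Weak learner oracle: return the decision stump (feature, threshold,
        sign) with the lowest weighted error; it only has to beat chance."""
        best = None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1.0, -1.0):
                    pred = np.where(X[:, j] > t, s, -s)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        return best

    def boost(X, y, rounds=10):
        """AdaBoost-style loop; labels y must be in {-1, +1}."""
        w = np.full(len(y), 1.0 / len(y))             # example weights
        ensemble = []
        for _ in range(rounds):
            err, j, t, s = weak_learner(X, y, w)
            err = np.clip(err, 1e-10, 1.0 - 1e-10)
            alpha = 0.5 * np.log((1.0 - err) / err)   # leveraging coefficient
            pred = np.where(X[:, j] > t, s, -s)
            w = w * np.exp(-alpha * y * pred)         # upweight mistakes
            w = w / w.sum()
            ensemble.append((alpha, j, t, s))
        return ensemble
    ```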

    The term “zeroth-order optimization” describes a family of methods that locate a function’s minima and maxima without using gradient information. These techniques shine when the function is noisy or non-differentiable, or when computing the gradient would be prohibitively expensive or impractical. Instead, the search for the best solution is guided entirely by function evaluations.
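    As a minimal sketch of the idea (illustrative only, not taken from the paper), the following random-search loop optimizes using nothing but function evaluations, so kinks and jumps in the objective do not trouble it:

    ```python
    import numpy as np

    def random_search(f, x0, step=0.5, iters=2000, seed=0):
        """Zeroth-order optimization: probe with random steps and keep
        improvements; only function evaluations are ever used."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(iters):
            cand = x + step * rng.standard_normal(x.shape)  # random probe
            fc = f(cand)
            if fc < fx:          # accept only if the candidate improves
                x, fx = cand, fc
        return x, fx

    # Works even on a non-differentiable, discontinuous objective:
    f = lambda x: np.abs(x).sum() + (5.0 if x[0] > 1.0 else 0.0)
    print(random_search(f, np.array([3.0, -2.0])))
    ```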

    Even though ML has witnessed a significant uptick in zeroth-order optimization across numerous settings and algorithms in recent years, there have been few corresponding investigations into boosting. The question is highly pertinent: boosting has rapidly developed into a method that requires first-order knowledge of the loss, and reducing boosting to this first-order setting is now routine, yet the original boosting model required only a weak learner that could provide classifiers distinct from random guessing, not first-order loss information. With zeroth-order optimization becoming more popular in ML, it is important to know whether differentiability is necessary for boosting, which loss functions can be boosted with a weak learner, and how boosting compares with the recent formal progress on bringing gradient descent to zeroth-order optimization.

    Google’s research team aims to provide a formal boosting technique for any loss function whose set of discontinuities has zero Lebesgue measure. In practice, any loss function stored with conventional floating-point encoding satisfies this criterion. Theoretically, the researchers cover losses that are not necessarily convex, differentiable, Lipschitz, or continuous. Classical zeroth-order optimization solutions differ significantly in this regard: while those algorithms are zeroth-order, the assumptions their convergence proofs make about the loss (convexity, once- or twice-differentiability, Lipschitzness, and so on) are far more extensive. To sidestep the use of derivatives in boosting, the team employs or extends techniques from quantum calculus, some of which are becoming commonplace in zeroth-order optimization research.
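    As a rough illustration of the quantum-calculus idea (the names and constants below are hypothetical, not the paper’s exact construction), a q-derivative replaces the derivative with a secant through f(x) and f(qx), which remains well defined even where f is not differentiable:

    ```python
    def q_derivative(f, x, q=0.9):
        """Quantum-calculus q-derivative:
        D_q f(x) = (f(q*x) - f(x)) / ((q - 1) * x), defined for x != 0."""
        if x == 0:
            raise ValueError("the q-derivative is undefined at x = 0")
        return (f(q * x) - f(x)) / ((q - 1) * x)

    # For differentiable f, D_q f(x) -> f'(x) as q -> 1:
    f = lambda x: x ** 2
    print(q_derivative(f, 3.0, q=0.999))   # ~6.0, matching f'(3) = 6
    # But it also returns a finite slope where f' does not exist:
    g = lambda x: abs(x - 3.0)             # kink at x = 3
    print(q_derivative(g, 3.0, q=0.9))     # secant slope through the kink
    ```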

    Applied in this broader context, the proposed SecBoost technique uncovers two additional areas where deliberate design decisions can be leveraged to sustain boosting over a larger number of rounds. This not only addresses the issue of local minima but also handles losses that take constant values over portions of their domain. The potential of SecBoost is significant, pointing to promising directions for future boosting research and applications.
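    A small hypothetical example of why such flat losses defeat gradient methods: on a piecewise-constant “staircase” loss, an infinitesimal probe reports zero slope, while a secant with a sufficiently wide offset (the kind of quantity the offset oracle mentioned below is responsible for choosing) still detects the descent direction. This toy code is not the paper’s construction.

    ```python
    import math

    def staircase(x):
        """An "exotic" loss: flat steps descending toward a minimum at x = 0."""
        return math.floor(abs(x))

    def secant_slope(f, x, offset):
        return (f(x + offset) - f(x)) / offset

    print(secant_slope(staircase, 2.5, 1e-9))  # 0.0: a local, gradient-like probe stalls
    print(secant_slope(staircase, 2.5, 1.0))   # 1.0: a wide secant points downhill toward 0
    ```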

    Based on the findings, boosting compares favorably with the latest advances in zeroth-order optimization, because boosting-compliant convergence is achieved while the loss is assumed to satisfy only some of the typical assumptions used in such analyses. One issue still requires work here, namely optimizing the offset oracle efficiently; recent developments in zeroth-order optimization have likewise relied on significant design tricks to implement such algorithms. The team has not resolved this issue yet, but the appendix includes mock experiments showing that a simple implementation of SecBoost can optimize “exotic” kinds of losses.

    Check out the Paper. All credit for this research goes to the researchers of this project.

    This article originally appeared on MarkTechPost.