    Google DeepMind Presents MoNE: A Novel Computer Vision Framework for the Adaptive Processing of Visual Tokens by Dynamically Allocating Computational Resources to Different Tokens

    August 1, 2024

    One of the significant challenges in AI research is the computational inefficiency in processing visual tokens in Vision Transformer (ViT) and Video Vision Transformer (ViViT) models. These models process all tokens with equal emphasis, overlooking the inherent redundancy in visual data, which results in high computational costs. Addressing this challenge is crucial for the deployment of AI models in real-world applications where computational resources are limited and real-time processing is essential.

Current methods like ViTs and Mixture of Experts (MoE) models have been effective at processing large-scale visual data but come with significant limitations. ViTs treat all tokens equally, leading to unnecessary computation on redundant ones. MoEs improve scalability by conditionally activating parts of the network, keeping inference-time cost roughly constant as the parameter count grows; however, they enlarge the parameter footprint and cannot reduce per-token compute without skipping tokens entirely. Additionally, these models typically use experts with uniform computational capacity, limiting their ability to allocate resources according to token importance.
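To make the limitation concrete, here is a minimal, hypothetical sketch of standard top-1 MoE routing. Note that every token still triggers one full expert forward pass, so conditional activation grows parameters without cutting FLOPs per token (the function name and toy scores are illustrative, not from the paper):

```python
# Hypothetical sketch: top-1 Mixture-of-Experts routing.
# Each token is sent to its highest-scoring expert, but every expert
# has the same capacity, so per-token compute is not reduced.

def route_top1(router_scores):
    """Return the highest-scoring expert index for each token."""
    return [max(range(len(scores)), key=scores.__getitem__)
            for scores in router_scores]

scores = [
    [0.1, 0.7, 0.2],  # token 0 -> expert 1
    [0.6, 0.3, 0.1],  # token 1 -> expert 0
]
assignments = route_top1(scores)
```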

    A team of researchers from Google DeepMind and the University of Washington propose the Mixture of Nested Experts (MoNE) framework, which leverages a nested structure for experts to address the inefficiencies of existing methods. MoNE dynamically allocates computational resources by routing tokens to different nested experts based on their importance. This approach allows redundant tokens to be processed through smaller, cheaper models while more important tokens are routed to larger, more detailed models. The novelty lies in using a nested architecture that maintains the same parameter count as the baseline models but achieves a two-fold reduction in inference time compute. This adaptive processing not only enhances efficiency but also retains performance across different computational budgets.
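The nesting idea can be illustrated with a small, hypothetical sketch: each "expert" is a prefix slice of one shared weight matrix, so cheaper experts reuse the larger expert's parameters rather than adding new ones (the function name, weights, and widths below are invented for illustration):

```python
# Hypothetical sketch of nested experts: smaller experts are prefix
# slices of one shared weight matrix, so the parameter count matches
# the full model while cheap tokens use only a fraction of it.

def nested_forward(token, weights, width):
    """Project a token using only the first `width` rows of the shared weights."""
    return [sum(w * x for w, x in zip(row, token)) for row in weights[:width]]

shared_w = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.0]]  # full expert: 4 rows
token = [3.0, 4.0]
small = nested_forward(token, shared_w, width=2)  # cheap expert for redundant tokens
large = nested_forward(token, shared_w, width=4)  # full expert for important tokens
# `small` is exactly the first two entries of `large`: nested, not separate, parameters
```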

    MoNE integrates a nested architecture within Vision Transformers, where experts with varying computational capacities are arranged hierarchically. Each token is dynamically routed to an appropriate expert using the Expert Preferred Routing (EPR) algorithm. The model processes tokens through partial in- and out-projections in the Self-Attention (SA) and MLP layers, facilitating efficient computation. The framework is validated on datasets such as ImageNet-21K, Kinetics400, and Something-Something-v2. The routing decisions are made based on the importance of tokens, which is determined by the router network’s probability distribution. MoNE’s effectiveness is demonstrated through rigorous experiments showing strong performance across different inference-time compute budgets.
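The capacity-constrained routing step can be sketched as follows. This is a simplified, hypothetical rendering of the Expert Preferred Routing idea, assuming each expert has a fixed token budget and larger experts pick the highest-probability tokens first (function name and numbers are illustrative):

```python
# Hypothetical sketch of Expert Preferred Routing (EPR): experts have
# fixed capacities; tokens are ranked by router probability, and the
# largest expert (index 0) claims the most important tokens first.

def epr_assign(router_probs, capacities):
    """Map each token index to an expert index (0 = largest expert)."""
    order = sorted(range(len(router_probs)),
                   key=lambda i: router_probs[i], reverse=True)
    assignment, cursor = {}, 0
    for expert, cap in enumerate(capacities):
        for i in order[cursor:cursor + cap]:
            assignment[i] = expert
        cursor += cap
    return assignment

probs = [0.9, 0.2, 0.6, 0.1]  # router importance per token
caps = [1, 3]                 # 1 slot in the large expert, 3 in the small one
routing = epr_assign(probs, caps)
# token 0 (most important) -> large expert 0; the rest -> small expert 1
```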

    The proposed method achieves significant improvements in computational efficiency and performance across various datasets. On the ImageNet-21K dataset, MoNE achieves an accuracy of 87.5%, which is a substantial improvement over the baseline models. In video classification tasks, such as those involving the Kinetics400 and Something-Something-v2 datasets, MoNE demonstrates a two- to three-fold reduction in computational costs while maintaining or exceeding the accuracy of traditional methods. The adaptive processing capabilities of MoNE enable it to maintain robust performance even under constrained computational budgets, showcasing its effectiveness in both image and video data processing.

    In conclusion, the Mixture of Nested Experts (MoNE) framework offers a significant advance in processing visual tokens efficiently. By dynamically allocating computational resources based on token importance, MoNE overcomes the limitations of existing ViT and MoE models, achieving substantial reductions in computational cost without sacrificing performance. This innovation holds great potential for enhancing real-world applications of AI, making high-performance models more accessible and practical. The contributions are validated through rigorous experiments demonstrating MoNE's adaptability and robustness across different datasets and computational budgets.

    Check out the Paper. All credit for this research goes to the researchers of this project.

    The post Google DeepMind Presents MoNE: A Novel Computer Vision Framework for the Adaptive Processing of Visual Tokens by Dynamically Allocating Computational Resources to Different Tokens appeared first on MarkTechPost.
