    Researchers from Meta AI and UT Austin Explored Scaling in Auto-Encoders and Introduced ViTok: A ViT-Style Auto-Encoder to Perform Exploration

    January 18, 2025

    Modern image and video generation methods rely heavily on tokenization to encode high-dimensional data into compact latent representations. While advancements in scaling generator models have been substantial, tokenizers—primarily based on convolutional neural networks (CNNs)—have received comparatively less attention. This raises the question of whether scaling tokenizers could improve reconstruction accuracy and downstream generative performance. Challenges include architectural limitations and constrained training data, which limit scalability and broader applicability. There is also a need to understand how design choices in auto-encoders influence performance metrics such as fidelity, compression, and generation quality.
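
    To make this concrete, the back-of-the-envelope sketch below shows how much a visual tokenizer shrinks the data a generator has to model. The image size and latent size are illustrative placeholders, not figures from the paper.

    ```python
    # Illustrative only: how much a visual tokenizer compresses an image before
    # generation. The image and latent sizes below are made-up placeholders.
    H, W, C = 256, 256, 3      # input image: height, width, channels
    E = 4096                   # hypothetical number of floats in the latent code

    pixel_values = H * W * C   # 196,608 raw values
    ratio = pixel_values / E   # the generator models ~48x fewer values

    print(f"{pixel_values} pixel values -> {E} latent floats "
          f"(~{ratio:.0f}x compression)")
    ```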

    Researchers from Meta and UT Austin have addressed these issues by introducing ViTok, a Vision Transformer (ViT)-based auto-encoder. Unlike traditional CNN-based tokenizers, ViTok employs a Transformer-based architecture enhanced by the Llama framework. This design supports large-scale tokenization for images and videos, overcoming dataset constraints by training on extensive and diverse data.

    ViTok focuses on three aspects of scaling:

    1. Bottleneck scaling: Examining the relationship between latent code size and performance.
    2. Encoder scaling: Evaluating the impact of increasing encoder complexity.
    3. Decoder scaling: Assessing how larger decoders influence reconstruction and generation.

    These efforts aim to optimize visual tokenization for both images and videos by addressing inefficiencies in existing architectures.
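
    As a rough illustration of how these three experiments could be organized, the configuration sketch below exposes one knob per scaling axis. The field names and default values are assumptions made for illustration, not settings from the paper.

    ```python
    # Hypothetical configuration exposing the three scaling axes studied above.
    # Field names and defaults are illustrative, not the paper's settings.
    from dataclasses import dataclass

    @dataclass
    class TokenizerScalingConfig:
        bottleneck_floats: int = 4096   # bottleneck scaling: total floats E in the latent code
        encoder_width: int = 768        # encoder scaling: ViT encoder size
        encoder_depth: int = 6
        decoder_width: int = 1024       # decoder scaling: ViT decoder size
        decoder_depth: int = 12

    # Sweeping one axis while holding the others fixed mirrors the three
    # experiments listed above.
    base = TokenizerScalingConfig()
    bigger_decoder = TokenizerScalingConfig(decoder_width=1536, decoder_depth=24)
    ```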

    Technical Details and Advantages of ViTok

    ViTok uses an asymmetric auto-encoder framework with several distinctive features:

      1. Patch and Tubelet Embedding: Inputs are divided into patches (for images) or tubelets (for videos) to capture spatial and spatiotemporal details (a sketch follows this list).
    2. Latent Bottleneck: The size of the latent space, defined by the number of floating points (E), determines the balance between compression and reconstruction quality.
    3. Encoder and Decoder Design: ViTok employs a lightweight encoder for efficiency and a more computationally intensive decoder for robust reconstruction.
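
    A minimal sketch of the patch and tubelet embedding step (item 1 above) is shown below, using standard strided convolutions. The patch size, tubelet length, and token width are illustrative placeholders rather than ViTok's actual hyperparameters.

    ```python
    # Sketch of patch (image) and tubelet (video) embedding via strided convolutions.
    # All sizes are illustrative placeholders.
    import torch
    import torch.nn as nn

    patch = 16       # spatial patch size
    tubelet = 4      # temporal extent of a video tubelet
    width = 768      # hypothetical token width

    # Images: non-overlapping 16x16 patches, one token per patch.
    img_embed = nn.Conv2d(3, width, kernel_size=patch, stride=patch)
    img = torch.randn(1, 3, 256, 256)
    img_tokens = img_embed(img).flatten(2).transpose(1, 2)    # (1, 256, width)

    # Videos: 4x16x16 tubelets capture motion as well as spatial detail.
    vid_embed = nn.Conv3d(3, width, kernel_size=(tubelet, patch, patch),
                          stride=(tubelet, patch, patch))
    vid = torch.randn(1, 3, 16, 256, 256)                     # 16 frames
    vid_tokens = vid_embed(vid).flatten(2).transpose(1, 2)    # (1, 1024, width)
    ```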

    By leveraging Vision Transformers, ViTok improves scalability. Its enhanced decoder incorporates perceptual and adversarial losses to produce high-quality outputs. Together, these components enable ViTok to:

    • Achieve effective reconstruction with fewer FLOPs.
    • Handle image and video data efficiently, taking advantage of the redundancy in video sequences.
    • Balance trade-offs between fidelity (e.g., PSNR, SSIM) and perceptual quality (e.g., FID, IS).
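
    The decoder objective described above can be sketched as a weighted sum of reconstruction, perceptual, and adversarial terms. The snippet below is a hedged illustration: the L1 pixel term, the LPIPS-style perceptual model, the discriminator, and the loss weights are assumptions, not the paper's exact recipe.

    ```python
    # Illustrative decoder loss combining pixel, perceptual, and adversarial terms.
    # `lpips_model` and `discriminator` are assumed callables, not ViTok components.
    import torch
    import torch.nn.functional as F

    def tokenizer_loss(recon, target, lpips_model, discriminator,
                       w_perceptual=1.0, w_adv=0.1):
        rec = F.l1_loss(recon, target)              # pixel fidelity (drives PSNR/SSIM)
        perc = lpips_model(recon, target).mean()    # distance in a pretrained feature space
        adv = -discriminator(recon).mean()          # push reconstructions toward "real"
        return rec + w_perceptual * perc + w_adv * adv
    ```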

    Results and Insights

    ViTok’s performance was evaluated on benchmarks such as ImageNet-1K and COCO for images, and UCF-101 for videos. Key findings include:

    • Bottleneck Scaling: Increasing bottleneck size improves reconstruction but can complicate generative tasks if the latent space is too large.
    • Encoder Scaling: Larger encoders show minimal benefits for reconstruction and may hinder generative performance due to increased decoding complexity.
    • Decoder Scaling: Larger decoders enhance reconstruction quality, but their benefits for generative tasks vary. A balanced design is often required.

    Results highlight ViTok’s strengths in efficiency and accuracy:

    • State-of-the-art metrics for image reconstruction at 256p and 512p resolutions (see the PSNR example after this list).
    • Improved video reconstruction scores, demonstrating adaptability to spatiotemporal data.
    • Competitive generative performance in class-conditional tasks with reduced computational demands.
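
    For reference, PSNR, one of the fidelity metrics cited above, follows directly from the mean squared reconstruction error. The example below uses random tensors purely for illustration.

    ```python
    # PSNR = 10 * log10(MAX^2 / MSE); higher means the reconstruction is closer
    # to the original. Tensors here are random and purely illustrative.
    import torch

    def psnr(recon, target, max_val=1.0):
        mse = torch.mean((recon - target) ** 2)
        return 10.0 * torch.log10(max_val ** 2 / mse)

    x = torch.rand(1, 3, 256, 256)
    noisy = (x + 0.05 * torch.randn_like(x)).clamp(0, 1)
    print(f"PSNR: {psnr(noisy, x).item():.1f} dB")
    ```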

    Conclusion

    ViTok offers a scalable, Transformer-based alternative to traditional CNN tokenizers, addressing key challenges in bottleneck design, encoder scaling, and decoder optimization. Its robust performance across reconstruction and generation tasks highlights its potential for a wide range of applications. By effectively handling both image and video data, ViTok underscores the importance of thoughtful architectural design in advancing visual tokenization.


    Check out the Paper. All credit for this research goes to the researchers of this project.

    The post Researchers from Meta AI and UT Austin Explored Scaling in Auto-Encoders and Introduced ViTok: A ViT-Style Auto-Encoder to Perform Exploration appeared first on MarkTechPost.
