
    Meta AI Released the Perception Language Model (PLM): An Open and Reproducible Vision-Language Model to Tackle Challenging Visual Recognition Tasks

    April 18, 2025

    Despite rapid advances in vision-language modeling, much of the progress in this field has been shaped by models trained on proprietary datasets, often relying on distillation from closed-source systems. This reliance creates barriers to scientific transparency and reproducibility, particularly for tasks involving fine-grained image and video understanding. Benchmark performance may reflect the training data and black-box model capabilities more than architectural or methodological improvements, making it difficult to assess true research progress.

    To address these limitations, Meta AI has introduced the Perception Language Model (PLM), a fully open and reproducible framework for vision-language modeling. PLM is designed to support both image and video inputs and is trained without the use of proprietary model outputs. Instead, it draws from large-scale synthetic data and newly collected human-labeled datasets, enabling a detailed evaluation of model behavior and training dynamics under transparent conditions.

    The PLM framework integrates a vision encoder (Perception Encoder) with LLaMA 3 language decoders of varying sizes—1B, 3B, and 8B parameters. It employs a multi-stage training pipeline: initial warm-up with low-resolution synthetic images, large-scale midtraining on diverse synthetic datasets, and supervised fine-tuning using high-resolution data with precise annotations. This pipeline emphasizes training stability and scalability while maintaining control over data provenance and content.
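
    To make the staging concrete, here is a minimal sketch of how such a schedule might be expressed; the stage names follow the description above, while the resolutions and trainable-module splits are illustrative assumptions rather than Meta's reported configuration.

    # Illustrative sketch of PLM's three-stage training schedule.
    # Resolutions and trainable splits are assumptions, not paper values.
    TRAINING_STAGES = [
        dict(name="stage1_warmup",
             data="low-resolution synthetic images",
             resolution=224,                                   # assumed
             trainable=("projector",)),                        # assumed
        dict(name="stage2_midtraining",
             data="diverse synthetic datasets (~64.7M samples)",
             resolution=448,                                   # assumed
             trainable=("projector", "vision_encoder", "llm")),
        dict(name="stage3_sft",
             data="high-resolution human-labeled data",
             resolution=896,                                   # assumed
             trainable=("projector", "vision_encoder", "llm")),
    ]

    for stage in TRAINING_STAGES:
        print(f"{stage['name']}: train {stage['trainable']} on {stage['data']}")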

    A key contribution of the work is the release of two large-scale, high-quality video datasets addressing existing gaps in temporal and spatial understanding. The PLM–FGQA dataset comprises 2.4 million question-answer pairs capturing fine-grained details of human actions—such as object manipulation, movement direction, and spatial relations—across diverse video domains. Complementing this is PLM–STC, a dataset of 476,000 spatio-temporal captions linked to segmentation masks that track subjects across time, allowing models to reason about “what,” “where,” and “when” in complex video scenes.
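
    As a rough illustration of how records in these datasets might be structured, the hypothetical dataclasses below mirror the "what," "where," and "when" breakdown described above; the field names are guesses, not the released schema.

    from dataclasses import dataclass
    from typing import Dict, Tuple

    @dataclass
    class FGQARecord:                    # PLM-FGQA: fine-grained video QA
        video_id: str
        question: str                    # e.g. about object manipulation or movement direction
        answer: str
        time_span: Tuple[float, float]   # (start_sec, end_sec) the QA refers to

    @dataclass
    class STCRecord:                     # PLM-STC: spatio-temporal caption
        video_id: str
        caption: str                     # "what" happens
        frame_masks: Dict[int, bytes]    # frame index -> encoded segmentation mask ("where")
        time_span: Tuple[float, float]   # (start_sec, end_sec) ("when")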

    Technically, PLM employs a modular architecture that supports high-resolution image tiling (up to 36 tiles) and multi-frame video input (up to 32 frames). A 2-layer MLP projector connects the visual encoder to the LLM, and both synthetic and human-labeled data are structured to support a wide range of tasks including captioning, visual question answering, and dense region-based reasoning. The synthetic data engine, built entirely using open-source models, generates ~64.7 million samples across natural images, charts, documents, and videos—ensuring diversity while avoiding reliance on proprietary sources.
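
    The following minimal PyTorch-style sketch shows these two pieces; the 36-tile and 32-frame limits come from the text, while the dimensions and tiling logic are simplifying assumptions.

    import torch
    import torch.nn as nn

    MAX_TILES = 36    # high-resolution image tiling limit from the text
    MAX_FRAMES = 32   # multi-frame video input limit from the text

    class Projector(nn.Module):
        """2-layer MLP mapping vision features into the LLM embedding
        space; the dimensions are placeholders, not PLM's actual sizes."""
        def __init__(self, vision_dim=1024, llm_dim=4096):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(vision_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )

        def forward(self, vision_tokens):    # (batch, tokens, vision_dim)
            return self.mlp(vision_tokens)   # (batch, tokens, llm_dim)

    def tile_image(image, tile=448, max_tiles=MAX_TILES):
        """Split a (C, H, W) image into at most max_tiles square tiles.
        Assumes H and W are exact multiples of the tile size."""
        c, h, w = image.shape
        tiles = (image.unfold(1, tile, tile)     # (C, nH, W, tile)
                      .unfold(2, tile, tile)     # (C, nH, nW, tile, tile)
                      .permute(1, 2, 0, 3, 4)    # (nH, nW, C, tile, tile)
                      .reshape(-1, c, tile, tile))
        return tiles[:max_tiles]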

    Meta AI also introduces PLM–VideoBench, a new benchmark designed to evaluate aspects of video understanding not captured by existing benchmarks. It includes tasks such as fine-grained activity recognition (FGQA), smart-glasses video QA (SGQA), region-based dense captioning (RDCap), and spatio-temporal localization (RTLoc). These tasks require models to engage in temporally grounded and spatially explicit reasoning.
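
    A hypothetical harness over these four tasks might look like the sketch below; the task names come from PLM–VideoBench, but the metrics and the model/benchmark interface are assumptions for illustration.

    # Task registry for PLM-VideoBench; descriptions paraphrase the text,
    # and the suggested metrics are assumptions.
    VIDEOBENCH_TASKS = {
        "FGQA":  "fine-grained activity recognition (QA accuracy)",
        "SGQA":  "smart-glasses video QA (QA accuracy)",
        "RDCap": "region-based dense captioning (e.g. CIDEr)",
        "RTLoc": "spatio-temporal localization (e.g. temporal IoU)",
    }

    def evaluate(model, benchmark):
        # `model` and `benchmark` are hypothetical objects: the model maps
        # a sample to a prediction, and the benchmark scores them per task.
        scores = {}
        for task in VIDEOBENCH_TASKS:
            preds = [model(sample) for sample in benchmark.samples(task)]
            scores[task] = benchmark.score(task, preds)
        return scores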

    Empirical evaluations show that PLM models, particularly at the 8B parameter scale, perform competitively across 40+ image and video benchmarks. In video captioning, PLM achieves gains of +39.8 CIDEr on average over open baselines. On PLM–VideoBench, the 8B variant closes the gap with human performance in structured tasks such as FGQA and shows improved results in spatio-temporal localization and dense captioning. Notably, all results are obtained without distillation from closed models, underscoring the feasibility of open, transparent VLM development.

    In summary, PLM offers a methodologically rigorous and fully open framework for training and evaluating vision-language models. Its release includes not just models and code, but also the largest curated dataset for fine-grained video understanding and a benchmark suite that targets previously underexplored capabilities. PLM is positioned to serve as a foundation for reproducible research in multimodal AI and a resource for future work on detailed visual reasoning in open settings.


    The paper, model, and code for PLM are publicly available.

    Source: MarkTechPost

