
    Google AI Just Released TimesFM-2.0 (JAX and PyTorch) on Hugging Face with a Significant Boost in Accuracy and Maximum Context Length

    January 11, 2025

    Time-series forecasting plays a crucial role in various domains, including finance, healthcare, and climate science. However, achieving accurate predictions remains a significant challenge. Traditional methods like ARIMA and exponential smoothing often struggle to generalize across domains or handle the complexities of high-dimensional data. Contemporary deep learning approaches, while promising, frequently require large labeled datasets and substantial computational resources, making them inaccessible to many organizations. Additionally, these models often lack the flexibility to handle varying time granularities and forecast horizons, further limiting their applicability.

    Google AI has just released TimesFM-2.0, a new foundation model for time-series forecasting, now available on Hugging Face in both JAX and PyTorch implementations. This release brings improvements in accuracy and extends the maximum context length, offering a robust and versatile solution for forecasting challenges. TimesFM-2.0 builds on its predecessor by integrating architectural enhancements and leveraging a diverse training corpus, ensuring strong performance across a range of datasets.

    The model’s open availability on Hugging Face underscores Google AI’s effort to support collaboration within the AI community. Researchers and developers can readily fine-tune or deploy TimesFM-2.0, facilitating advancements in time-series forecasting practices.
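    For readers who want to try it, the sketch below loads the PyTorch checkpoint through the open-source timesfm package and produces a zero-shot forecast. It is a minimal example based on the project's README at release time; the hyperparameter values and the repo id (google/timesfm-2.0-500m-pytorch) may change between versions.

```python
# Minimal sketch: zero-shot forecasting with the `timesfm` package
# (pip install timesfm). Hyperparameters follow the README for the
# 2.0 checkpoint at release time and may differ in later versions.
import numpy as np
import timesfm

tfm = timesfm.TimesFm(
    hparams=timesfm.TimesFmHparams(
        backend="gpu",               # or "cpu"
        per_core_batch_size=32,
        horizon_len=128,             # steps to forecast per call
        context_len=2048,            # TimesFM-2.0 supports longer histories
        num_layers=50,
        use_positional_embedding=False,
    ),
    checkpoint=timesfm.TimesFmCheckpoint(
        huggingface_repo_id="google/timesfm-2.0-500m-pytorch"
    ),
)

history = [np.sin(np.linspace(0, 20, 512))]      # one toy series
point_forecast, quantile_forecast = tfm.forecast(
    history,
    freq=[0],  # 0 = high-, 1 = medium-, 2 = low-frequency granularity
)
print(point_forecast.shape)  # (1, 128)
```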

    Technical Innovations and Benefits

    TimesFM-2.0 incorporates several advancements that enhance its forecasting capabilities. Its decoder-only architecture is designed to accommodate varying history lengths, prediction horizons, and time granularities. Techniques like input patching and patch masking enable efficient training and inference, while also supporting zero-shot forecasting—a rare feature among forecasting models.
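    To make the patching idea concrete, here is a framework-agnostic sketch of input patching with a padding mask. It illustrates the general scheme described in the TimesFM paper, not the model's exact implementation; the patch length of 32 is an arbitrary choice for illustration.

```python
import numpy as np

def patch_series(series: np.ndarray, patch_len: int = 32):
    """Split a 1-D history into fixed-length input patches, left-padding
    the oldest patch with zeros and returning a mask over padded values."""
    pad = (-len(series)) % patch_len
    padded = np.concatenate([np.zeros(pad), series])
    mask = np.concatenate([np.ones(pad), np.zeros(len(series))])  # 1 = padding
    return padded.reshape(-1, patch_len), mask.reshape(-1, patch_len)

patches, mask = patch_series(np.arange(100, dtype=float))
print(patches.shape)  # (4, 32): each patch becomes one decoder token
```

    Treating each patch, rather than each time step, as a token shortens the decoder's sequence, which is what lets a single model handle histories of varying length efficiently.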

    One of its key features is the ability to predict longer horizons by generating larger output patches, reducing the computational overhead of autoregressive decoding. The model is trained on a rich dataset comprising real-world data from sources such as Google Trends and Wikimedia pageviews, as well as synthetic datasets. This diverse training data equips the model to recognize a broad spectrum of temporal patterns. Pretraining on over 100 billion time points enables TimesFM-2.0 to deliver performance comparable to state-of-the-art supervised models, often without the need for task-specific fine-tuning.
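    The saving from larger output patches is easy to quantify: the number of autoregressive passes is the horizon divided by the output patch length, rounded up. The patch lengths below are illustrative, not the model's exact configuration.

```python
import math

def decode_passes(horizon: int, output_patch_len: int) -> int:
    """Autoregressive passes needed when each pass emits
    `output_patch_len` future values."""
    return math.ceil(horizon / output_patch_len)

print(decode_passes(512, 1))    # 512 passes with one-step-ahead decoding
print(decode_passes(512, 128))  # 4 passes with a large output patch
```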

    With roughly 500 million parameters, the model balances computational efficiency with forecasting accuracy, making it practical to deploy in a range of scenarios.

    Results and Insights

    Empirical evaluations highlight the model’s strong performance. In zero-shot settings, TimesFM-2.0 consistently matches or outperforms traditional statistical and deep learning baselines across diverse datasets. For example, on the Monash archive, a collection of 30 datasets spanning various granularities and domains, TimesFM-2.0 achieved superior results on scaled mean absolute error (MAE), outperforming models such as N-BEATS and DeepAR.
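    The Monash results are reported as scaled MAE, i.e., MAE normalized by the error of a naive baseline so that scores are comparable across datasets. The sketch below uses the MASE-style convention of scaling by the in-sample seasonal-naive error; the archive's exact scaling may differ in detail.

```python
import numpy as np

def scaled_mae(y_true, y_pred, y_train, season: int = 1) -> float:
    """MAE divided by the in-sample MAE of a seasonal-naive forecast
    (values below 1.0 beat the naive baseline)."""
    mae = np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
    naive_mae = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return float(mae / naive_mae)

y_train = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 14.0])
print(scaled_mae([13.0, 15.0], [12.5, 14.0], y_train))  # ~0.47
```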

    On the Darts benchmarks, which include univariate datasets with complex seasonal patterns, TimesFM-2.0 delivered competitive results, often matching the top-performing methods. Similarly, evaluations on Informer datasets, such as electricity transformer temperature datasets, demonstrated the model’s effectiveness in handling long horizons (e.g., 96 and 192 steps).

    TimesFM-2.0 tops the GIFT-Eval leaderboard on point and probabilistic forecasting accuracy metrics.

    Ablation studies underscored the impact of specific design choices. Increasing the output patch length, for instance, reduced the number of autoregressive steps, improving efficiency without sacrificing accuracy. The inclusion of synthetic data proved valuable in addressing underrepresented granularities, such as quarterly and yearly datasets, further enhancing the model’s robustness.

    Conclusion

    Google AI’s release of TimesFM-2.0 represents a meaningful advance in time-series forecasting. By combining scalability, accuracy, and adaptability, the model addresses common forecasting challenges with a practical, efficient solution. Its open availability invites the research community to explore and build on it. Whether applied to financial modeling, climate prediction, or healthcare analytics, TimesFM-2.0 helps organizations make better-informed decisions.


    Check out the Paper and the Model on Hugging Face. All credit for this research goes to the researchers of this project.


    The post Google AI Just Released TimesFM-2.0 (JAX and PyTorch) on Hugging Face with a Significant Boost in Accuracy and Maximum Context Length appeared first on MarkTechPost.
