
    Google AI Just Released TimesFM-2.0 (JAX and PyTorch) on Hugging Face with a Significant Boost in Accuracy and Maximum Context Length

    January 11, 2025

    Time-series forecasting plays a crucial role in various domains, including finance, healthcare, and climate science. However, achieving accurate predictions remains a significant challenge. Traditional methods like ARIMA and exponential smoothing often struggle to generalize across domains or handle the complexities of high-dimensional data. Contemporary deep learning approaches, while promising, frequently require large labeled datasets and substantial computational resources, making them inaccessible to many organizations. Additionally, these models often lack the flexibility to handle varying time granularities and forecast horizons, further limiting their applicability.

    Google AI has just released TimesFM-2.0, a new foundation model for time-series forecasting, now available on Hugging Face in both JAX and PyTorch implementations. This release brings improvements in accuracy and extends the maximum context length, offering a robust and versatile solution for forecasting challenges. TimesFM-2.0 builds on its predecessor by integrating architectural enhancements and leveraging a diverse training corpus, ensuring strong performance across a range of datasets.

    The model’s open availability on Hugging Face underscores Google AI’s effort to support collaboration within the AI community. Researchers and developers can readily fine-tune or deploy TimesFM-2.0, facilitating advancements in time-series forecasting practices.
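
    For readers who want to try it, the snippet below is a minimal sketch based on the usage example published with the release. The repo id, hyperparameter values, and forecast call are taken from that published example rather than guaranteed against later versions of the timesfm package, so check the Hugging Face model card before relying on them.

```python
import numpy as np
import timesfm

# Sketch only: the hyperparameters must match the checkpoint being loaded;
# these values follow the example published alongside the release.
tfm = timesfm.TimesFm(
    hparams=timesfm.TimesFmHparams(
        backend="gpu",              # or "cpu"
        per_core_batch_size=32,
        horizon_len=128,
        num_layers=50,
        use_positional_embedding=False,
        context_len=2048,           # TimesFM-2.0 supports longer contexts
    ),
    checkpoint=timesfm.TimesFmCheckpoint(
        huggingface_repo_id="google/timesfm-2.0-500m-pytorch"
    ),
)

# Zero-shot forecast over a toy context window; freq=0 denotes the
# highest-frequency bucket (daily or finer) in the timesfm convention.
history = [np.sin(np.linspace(0, 20, 256))]
point_forecast, quantile_forecast = tfm.forecast(history, freq=[0])
print(point_forecast.shape)  # (1, 128)
```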

    Technical Innovations and Benefits

    TimesFM-2.0 incorporates several advancements that enhance its forecasting capabilities. Its decoder-only architecture is designed to accommodate varying history lengths, prediction horizons, and time granularities. Techniques like input patching and patch masking enable efficient training and inference, while also supporting zero-shot forecasting—a rare feature among forecasting models.
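
    To make the patching idea concrete, here is a small framework-agnostic sketch: the context window is split into fixed-length patches that the decoder consumes as tokens, with a mask marking padded positions. The patch length of 32 follows the TimesFM paper, but the padding and masking convention here is a simplification, not the model's exact preprocessing.

```python
import numpy as np

PATCH_LEN = 32  # input patch length used in the TimesFM paper

def patchify(series: np.ndarray, patch_len: int = PATCH_LEN):
    """Left-pad the series to a multiple of patch_len; return patches and
    a mask (1 = padded position) so the model can ignore the padding."""
    pad = (-len(series)) % patch_len
    padded = np.concatenate([np.zeros(pad), series])
    mask = np.concatenate([np.ones(pad), np.zeros(len(series))])
    return padded.reshape(-1, patch_len), mask.reshape(-1, patch_len)

patches, mask = patchify(np.random.randn(100))
print(patches.shape, mask.shape)  # (4, 32) (4, 32): 100 points padded to 128
```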

    One of its key features is the ability to predict longer horizons by generating larger output patches, reducing the computational overhead of autoregressive decoding. The model is trained on a rich dataset comprising real-world data from sources such as Google Trends and Wikimedia pageviews, as well as synthetic datasets. This diverse training data equips the model to recognize a broad spectrum of temporal patterns. Pretraining on over 100 billion time points enables TimesFM-2.0 to deliver performance comparable to state-of-the-art supervised models, often without the need for task-specific fine-tuning.
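
    The efficiency gain is easy to see with a toy decode loop: with an output patch of 128 steps, a 512-step horizon takes four forward passes instead of 512 one-step calls. The decode_one_patch function below is a hypothetical stand-in for the model's forward pass, not TimesFM's actual decoder.

```python
import numpy as np

OUTPUT_PATCH_LEN = 128
HORIZON = 512

def decode_one_patch(context: np.ndarray) -> np.ndarray:
    """Stand-in for one decoder forward pass: a real model would predict
    the next OUTPUT_PATCH_LEN values; here we just repeat the last one."""
    return np.full(OUTPUT_PATCH_LEN, context[-1])

context = np.sin(np.arange(512) / 16.0)        # toy history
num_steps = -(-HORIZON // OUTPUT_PATCH_LEN)    # ceil division: 4 steps
chunks = []
for _ in range(num_steps):
    patch = decode_one_patch(context)
    chunks.append(patch)
    context = np.concatenate([context, patch])  # feed predictions back in
forecast = np.concatenate(chunks)[:HORIZON]
print(num_steps, forecast.shape)                # 4 (512,)
```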

    With 200 million parameters, the model balances computational efficiency and forecasting accuracy, making it practical for deployment in various scenarios.

    Results and Insights

    Empirical evaluations highlight the model’s strong performance. In zero-shot settings, TimesFM-2.0 consistently performs well compared to traditional and deep learning baselines across diverse datasets. For example, on the Monash archive—a collection of 30 datasets covering various granularities and domains—TimesFM-2.0 achieved superior results in terms of scaled mean absolute error (MAE), outperforming models like N-BEATS and DeepAR.
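
    For reference, one common way to compute a scale-free MAE of this kind, essentially the MASE used in the Monash evaluations (the exact normalizer in the TimesFM paper may differ), is to divide the model's MAE by that of a seasonal-naive baseline:

```python
import numpy as np

def scaled_mae(y_true, y_pred, y_train, season: int = 1) -> float:
    """MAE divided by the in-sample MAE of a seasonal-naive forecaster,
    so scores are comparable across datasets with different scales."""
    y_true, y_pred, y_train = map(np.asarray, (y_true, y_pred, y_train))
    mae = np.mean(np.abs(y_true - y_pred))
    naive_mae = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return mae / naive_mae  # values below 1 beat the naive baseline

# e.g. monthly data with yearly seasonality would pass season=12
```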

    On the Darts benchmarks, which include univariate datasets with complex seasonal patterns, TimesFM-2.0 delivered competitive results, often matching the top-performing methods. Similarly, evaluations on Informer datasets, such as electricity transformer temperature datasets, demonstrated the model’s effectiveness in handling long horizons (e.g., 96 and 192 steps).

    TimesFM-2.0 tops the GIFT-Eval leaderboard on point and probabilistic forecasting accuracy metrics.

    Ablation studies underscored the impact of specific design choices. Increasing the output patch length, for instance, reduced the number of autoregressive steps, improving efficiency without sacrificing accuracy. The inclusion of synthetic data proved valuable in addressing underrepresented granularities, such as quarterly and yearly datasets, further enhancing the model’s robustness.

    Conclusion

    Google AI’s release of TimesFM-2.0 represents a thoughtful advance in time-series forecasting. By combining scalability, accuracy, and adaptability, the model addresses common forecasting challenges with a practical and efficient solution. Its open-source availability invites the research community to explore its potential, fostering further innovation in this domain. Whether used for financial modeling, climate predictions, or healthcare analytics, TimesFM-2.0 equips organizations to make informed decisions with confidence and precision.


    Check out the Paper and Model on Hugging Face. All credit for this research goes to the researchers of this project.

    The post Google AI Just Released TimesFM-2.0 (JAX and PyTorch) on Hugging Face with a Significant Boost in Accuracy and Maximum Context Length appeared first on MarkTechPost.
