
    Sequential-NIAH: A Benchmark for Evaluating LLMs in Extracting Sequential Information from Long Texts

    April 24, 2025

    Evaluating how well LLMs handle long contexts is essential, especially for retrieving specific, relevant information embedded in lengthy inputs. Many recent LLMs, such as Gemini-1.5, GPT-4, Claude-3.5, and Qwen-2.5, have pushed the boundaries of context length while striving to maintain strong reasoning abilities. To assess such capabilities, benchmarks like ∞Bench, LongBench, and L-Eval have been developed. However, these often overlook the “Needle-in-a-Haystack” (NIAH) task, which challenges models to retrieve a few critical pieces of information from predominantly irrelevant content. Earlier benchmarks, such as RULER and Counting-Stars, offered synthetic and simplistic NIAH setups, using items like passwords or symbols as needles. NeedleBench improved on this by including more realistic, semantically meaningful needles and logical reasoning questions. Yet it still lacks tasks involving the retrieval and correct ordering of sequential information, such as timestamps or procedural steps.

    Efforts to enhance LLMs’ long-context capabilities have employed methods like RoPE, ALiBi, and memory-based techniques, along with architectural changes seen in models like Mamba and FLASHBUTTERFLY. Modern LLMs now support extensive contexts—Gemini 1.5 and Kimi can process up to 1–2 million tokens. NIAH benchmarks assess how effectively models can extract relevant data from vast amounts of text, and NeedleBench further incorporates logical relationships to simulate real-world scenarios. Regarding evaluation, natural language generation (NLG) performance is typically assessed using metrics derived from LLMs, prompt-based evaluations, fine-tuned models, or human-LLM collaborations. While prompting alone often underperforms, fine-tuning and human-in-the-loop methods can greatly enhance evaluation accuracy and reliability.
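
    To make one of these positional methods concrete, here is a minimal sketch of rotary position embeddings (RoPE) in NumPy, using the half-rotation variant popularized by open models. The function name and shapes are illustrative only, not taken from any particular model's codebase.

```python
import numpy as np

def apply_rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embeddings to x of shape (seq_len, dim).

    Each channel pair is rotated by an angle that grows with the token
    position, so query-key dot products encode relative position.
    """
    seq_len, dim = x.shape
    half = dim // 2
    # Per-pair rotation frequencies, as in the RoPE formulation.
    freqs = base ** (-np.arange(half) / half)          # (half,)
    angles = np.outer(np.arange(seq_len), freqs)       # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # 2D rotation applied pairwise across the two halves.
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

# Toy usage: rotate 8-dimensional query vectors for a 4-token sequence.
q = np.random.randn(4, 8)
q_rot = apply_rope(q)
```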

    Researchers from Tencent YouTu Lab have introduced Sequential-NIAH, a benchmark designed to assess how well LLMs retrieve sequential information (the “needles”) from long texts. The benchmark includes synthetic, real, and open-domain QA needles embedded in contexts ranging from 8K to 128K tokens, totaling 14,000 samples. An evaluation model trained on synthetic data achieved 99.49% accuracy in judging the correctness and order of responses. However, tests on six popular LLMs showed the highest performance at just 63.15%, highlighting the difficulty of the task and the need for further advances in long-context comprehension.

    The Sequential-NIAH benchmark is designed to evaluate models on retrieving sequentially ordered information (needles) from long texts (haystacks). It uses three types of QA synthesis pipelines: synthetic (generated events in order), real (extracted from temporal knowledge graphs), and open-domain QA (logically ordered answers). These QA pairs are inserted into diverse, long texts sourced from the LongData Corpus, covering various domains. To construct samples, the long text is segmented, needles are randomly shuffled and embedded, and the task is framed using prompt templates. The final dataset comprises 14,000 samples, split across training, development, and test sets, in both English and Chinese.
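
    As a rough sketch of this construction step, the snippet below segments a haystack, shuffles the needles, embeds them at random positions, and applies a prompt template. The template text and function signature are simplified placeholders of our own; the paper's actual prompts and corpus handling are not reproduced here.

```python
import random

PROMPT_TEMPLATE = (
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "List the answers in the correct order."
)

def build_sample(haystack: str, needles: list[str], question: str,
                 seed: int = 0) -> str:
    """Embed shuffled needles at random points in a long text.

    The needles carry the sequential information (e.g. timestamped
    events); the model must both find them and restore their order.
    """
    rng = random.Random(seed)
    # Segment the haystack into chunks that needles can fall between.
    segments = haystack.split("\n")
    shuffled = needles[:]
    rng.shuffle(shuffled)  # destroy the original order in the context
    for needle in shuffled:
        pos = rng.randint(0, len(segments))
        segments.insert(pos, needle)
    return PROMPT_TEMPLATE.format(context="\n".join(segments),
                                  question=question)
```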

    The evaluation model was compared against judges such as Claude-3.5 and GPT-4o on 1,960 samples, achieving 99.49% accuracy and outperforming GPT-4o (96.07%) and Claude-3.5 (87.09%) by significant margins. In subsequent benchmark tests on 2,000 samples, Gemini-1.5 outperformed the other models with an accuracy of 63.15%, while GPT-4o-mini and GPT-4o performed poorly. Performance varied with text length, number of needles, QA synthesis pipeline, and language, with Gemini-1.5 maintaining the most stable results. A noise analysis revealed that minor perturbations had a negligible impact on accuracy, but larger shifts in needle positions reduced model consistency, particularly for Qwen-2.5 and LLaMA-3.3.
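
    What the evaluation model judges, namely whether all needles are reported and in the right order, can be approximated programmatically. The check below is our own simplification for illustration, not the paper's trained evaluator, which can also credit paraphrases that exact string matching would miss.

```python
def order_correct(response: str, gold_items: list[str]) -> bool:
    """Return True iff every gold item occurs in the response,
    in the same relative order as the gold sequence."""
    cursor = 0
    for item in gold_items:
        idx = response.find(item, cursor)
        if idx == -1:
            return False           # item missing or out of order
        cursor = idx + len(item)
    return True

# Example: events must be reported chronologically.
gold = ["signed the contract", "shipped the product", "issued the refund"]
resp = ("First the vendor signed the contract, then they shipped the "
        "product, and finally they issued the refund.")
assert order_correct(resp, gold)
```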

    In conclusion, the Sequential-NIAH benchmark assesses LLMs on their ability to extract sequential information from lengthy texts (up to 128K tokens). It includes synthetic, real, and open-domain question-answering pipelines, with 14,000 samples spanning training, development, and test sets. Popular models including Claude, GPT-4o, Gemini, LLaMA, and Qwen were tested, and none achieved high accuracy, with the best performing at 63.15%. The evaluation model, trained on synthetic data, achieved 99.49% accuracy on the test data. The benchmark also highlights the challenges posed by increasing context lengths and needle counts, and its robustness is validated through noise tests, making it valuable for advancing LLM research.


    Check out the Paper.

    The post Sequential-NIAH: A Benchmark for Evaluating LLMs in Extracting Sequential Information from Long Texts appeared first on MarkTechPost.
