
    Step Towards Best Practices for Open Datasets for LLM Training

    January 20, 2025

    Large language models rely heavily on open datasets for training, and managing such datasets poses significant legal, technical, and ethical challenges. The legal implications of using the data remain uncertain because copyright laws vary across jurisdictions and regulations on permissible use keep changing. The absence of global standards or centralized databases for validating and licensing datasets, combined with incomplete or inconsistent metadata, makes it difficult to assess the legal status of individual works. Technical barriers also limit access to digitized public domain material. Most open datasets lack governance structures and offer no legal safety net for their contributors, exposing them to risk and making the projects hard to scale. And while such datasets are intended to foster transparency and collaboration, they do little to address broader social challenges such as diversity and accountability, and they often exclude underrepresented languages and viewpoints.

    Current methods of building open datasets for LLMs often lack clear legal frameworks and face significant technical, operational, and ethical challenges. Traditional approaches depend on incomplete metadata, which makes it hard to verify copyright status and ensure compliance across regions with differing laws. Digitizing public domain materials and making them accessible is difficult because large projects like Google Books restrict how their scans can be used, limiting what open datasets can be built on top of them. Volunteer-driven projects lack structured governance, which exposes contributors to legal risk. These gaps block equal access, reduce diversity in data representation, and concentrate power in a few dominant organizations. The result is an ecosystem in which open datasets struggle to compete with proprietary ones, reducing accountability and slowing progress toward transparent and inclusive AI development.

    To mitigate these issues in metadata encoding, data sourcing, and processing, the researchers proposed a framework for building a reliable corpus of openly licensed and public domain data for training large language models (LLMs). The framework emphasizes overcoming technical hurdles such as ensuring reliable metadata and digitizing physical records. It promotes cross-domain cooperation to responsibly curate, govern, and release these datasets while fostering competition in the LLM ecosystem. It also stresses metadata standards, reproducibility for accountability, and diversity of data sources, as an alternative to traditional methods that lack structured governance and transparency.
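    To make the metadata-standards point concrete, here is a minimal sketch of what a machine-readable provenance record for a single document in such a corpus could look like. The field names, SPDX license identifiers, and processing-step labels are illustrative assumptions, not a schema taken from the paper.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetRecord:
    """Hypothetical per-document metadata record for an openly licensed corpus."""
    doc_id: str                  # stable identifier within the corpus
    source_url: str              # where the text was obtained
    license_spdx: str            # SPDX identifier, e.g. "CC-BY-4.0" or "CC0-1.0"
    rights_holder: str           # attributed creator or institution
    language: str                # ISO 639-1 code, to help track language diversity
    digitized_from: str | None = None   # physical source, if the text was scanned
    processing_steps: list[str] = field(default_factory=list)  # for reproducibility

record = DatasetRecord(
    doc_id="pg-00123",
    source_url="https://example.org/text/00123",
    license_spdx="CC0-1.0",
    rights_holder="Example Public Library",
    language="en",
    digitized_from="1923 print edition",
    processing_steps=["ocr", "unicode-normalization", "deduplication"],
)

# Serialize to JSON so the provenance record can travel with the training data.
print(json.dumps(asdict(record), indent=2))
```

    Keeping a record like this alongside every document is one way consistent metadata, clear licensing, and reproducible preprocessing could be made auditable after the fact.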

    The researchers covered the practical steps of sourcing, processing, and governing such datasets. Tools for detecting openly licensed content were used to ensure high-quality data. The framework integrates standards for metadata consistency, emphasizes digitization, and encourages collaboration with communities to create datasets. It also supports transparency and reproducibility in preprocessing and addresses potential biases and harmful content, aiming for a robust and inclusive system for training LLMs while reducing legal risk. The framework further highlights engaging underrepresented communities to build diverse datasets and calls for clearer, machine-readable terms of use. Finally, to make the open data ecosystem sustainable, the authors propose funding models that combine public funding with support from tech companies and cultural institutions to ensure continued participation.
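    As an illustration of the license-detection and filtering step, the sketch below keeps only documents whose declared license appears on an allowlist of open licenses. The allowlist, the record shape, and the `license_spdx` field are assumptions made for this example; the paper does not prescribe a specific tool or format.

```python
# A minimal sketch of license-based filtering over candidate documents.
OPEN_LICENSES = {"CC0-1.0", "CC-BY-4.0", "CC-BY-SA-4.0", "Public-Domain"}

def is_openly_licensed(record: dict) -> bool:
    """Keep a document only if its declared license is on the allowlist."""
    return record.get("license_spdx") in OPEN_LICENSES

documents = [
    {"doc_id": "a1", "license_spdx": "CC-BY-4.0", "text": "..."},
    {"doc_id": "a2", "license_spdx": "All-Rights-Reserved", "text": "..."},
    {"doc_id": "a3", "license_spdx": None, "text": "..."},  # missing metadata is excluded
]

corpus = [d for d in documents if is_openly_licensed(d)]
print([d["doc_id"] for d in corpus])  # -> ['a1']
```

    A conservative rule like this treats missing or unknown license metadata as an exclusion, which is one way to keep legal risk low at the cost of recall.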

    Finally, the researchers laid out a broad plan for approaching the issues that arise from training LLMs on unlicensed data, with a focus on the openness of the datasets and on contributions from different communities. Initiatives such as standardizing metadata, improving digitization, and establishing responsible governance are intended to make the artificial intelligence ecosystem more open. The work lays a foundation for future research into dataset management, AI governance, and technologies that improve data accessibility while addressing ethical and legal challenges.


    Check out the Paper. All credit for this research goes to the researchers of this project.


    The post Step Towards Best Practices for Open Datasets for LLM Training appeared first on MarkTechPost.
