
    People Who Ship: Building Centralized AI Tooling

    May 13, 2025

    Welcome to People Who Ship! In this new video and blog series, we’ll be bringing you behind-the-scenes stories and hard-won insights from developers building and shipping production-grade AI applications using MongoDB.

    In each month’s episode, your host—myself, Senior AI Developer Advocate at MongoDB—will chat with developers from both inside and outside MongoDB about their projects, tools, and lessons learned along the way. Are you a developer? Great! This is the place for you; People Who Ship is by developers, for developers. And if you’re not (yet) a developer, that’s great too! Stick around to learn how your favorite applications are built.

    In this episode, John Ziegler, Engineering Lead on MongoDB’s internal generative AI (Gen AI) tooling team, shares technical decisions made and practical lessons learned while developing a centralized infrastructure called Central RAG (RAG = Retrieval Augmented Generation), which enables teams at MongoDB to rapidly build RAG-based chatbots and copilots for diverse use cases.

    John’s top three insights

    During our conversation, John shared a number of insights learned during the Central RAG project. Here are the top three:

    1. Enforce access controls across all operations

Maintaining data sensitivity and privacy is a key requirement when building enterprise-grade AI applications. It is especially important when curating data sources and building centralized infrastructure that teams and applications across the organization can use. In Central RAG, for example, users should only be able to select or link data sources they already have access to as knowledge sources for their LLM applications. Even at query time, the LLM should only pull information the querying user is authorized to see as context for answering the query. Access controls are typically enforced by an authentication service using access control lists (ACLs) that define the relationships between users and resources.

In Central RAG, this is handled by Credal’s permissions service. For a walkthrough of building an authentication layer using Credal’s permissions service and related tools such as OpenFGA, check out this article.
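To make the ACL idea above concrete, here is a minimal sketch of an access-controlled retrieval step. All names and data here are invented for illustration; Central RAG delegates this logic to Credal’s permissions service rather than implementing it in application code.

```python
# ACL: maps each data source to the set of principals allowed to read it.
# The sources and users below are purely illustrative.
ACL = {
    "eng-wiki": {"alice", "bob"},
    "hr-policies": {"carol"},
}

def can_read(user: str, source: str) -> bool:
    """Return True if `user` may use `source` as a knowledge source."""
    return user in ACL.get(source, set())

def retrieve_context(user: str, sources: list[str]) -> list[str]:
    """Filter to sources the querying user is authorized to read.

    A real system would then run retrieval (e.g., vector search) over
    only the allowed sources; here we just return the filtered list.
    """
    return [s for s in sources if can_read(user, s)]
```

The important property is that the filter runs both when a user links a source to an application and again at query time, so a later permission change is respected on every request.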

    2. Anchor your evaluations in the problem you are trying to solve

    Evaluation is a critical aspect of shipping software, including LLM applications. It is not a one-and-done process—each time you change any component of the system, you need to ensure that it does not adversely impact the system’s performance. The evaluation metrics depend on your application’s specific use cases.

    For Central RAG, which aims to help teams securely access relevant and up-to-date data sources for building LLM applications, the team incorporates the following checks in the form of integration and end-to-end tests in their CI/CD pipeline:

    • Ensure access controls are enforced when adding data sources.

    • Ensure access controls are enforced when retrieving information from data sources.

    • Ensure that data retention policies are respected, so that removed data sources are no longer retrieved or referenced downstream.

    • LLM-as-a-judge to evaluate response quality across various use cases with a curated dataset of question-answer pairs.
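The checks above can be expressed as ordinary integration tests. Here is an illustrative test for the data-retention check, using a stand-in document store; this is not Central RAG’s actual internals, just a sketch of the pattern.

```python
# Stand-in store: a real pipeline would wrap the ingestion service and
# a vector index, but the assertion pattern is the same.
class SourceStore:
    def __init__(self):
        self.sources = {}

    def add(self, name: str, docs: list[str]) -> None:
        self.sources[name] = docs

    def remove(self, name: str) -> None:
        self.sources.pop(name, None)

    def retrieve(self, query: str) -> list[str]:
        # Stand-in for vector search: substring match over all docs.
        return [d for docs in self.sources.values() for d in docs if query in d]

def test_removed_source_is_not_retrieved():
    store = SourceStore()
    store.add("wiki", ["mongodb sharding guide"])
    assert store.retrieve("sharding")            # retrievable while present
    store.remove("wiki")
    assert store.retrieve("sharding") == []      # gone after removal
```

Running such tests in CI/CD on every change catches regressions in retention and access-control behavior before they reach users.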

    If you would like to learn more about evaluating LLM applications, we have a detailed tutorial with code.

    3. Educate your users on what’s possible and what’s not

    User education is critical yet often overlooked when deploying software. This is especially true for this new generation of AI applications, where explaining best practices and setting clear expectations can prevent data security issues and user frustration.

    For Central RAG, teams must review the acceptable use policies, legal guidelines, and documentation on available data sources and appropriate use cases before gaining access to the platform. These materials also highlight scenarios to avoid, such as connecting sensitive data sources, and provide guidance on prompting best practices to ensure users can effectively leverage the platform within its intended boundaries.

    John’s AI tool recommendations

The backbone of Central RAG is a tool called Credal. Credal provides a platform for teams to quickly create AI applications on top of their data. As maintainers of Central RAG, John’s team uses Credal to curate the list of data sources teams can choose from and to manage the applications different teams create.

Teams can choose from the curated list or connect custom data sources via connectors, select from an exhaustive list of large language models (LLMs), configure system prompts, and deploy their applications to platforms such as Slack, directly from the Credal UI or via its API.
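The ingredients a team assembles in that workflow can be summarized in a simple structure. To be clear, this is not Credal’s actual API or configuration format; every field name below is invented, purely to show the shape of an application definition implied by the description above.

```python
# Hypothetical application definition: data sources, model, prompt,
# and deployment targets. Field names are illustrative, not Credal's.
app_config = {
    "name": "support-copilot",
    "data_sources": ["product-docs", "faq"],  # from the curated list
    "model": "gpt-4o",                        # one of the available LLMs
    "system_prompt": "Answer using only the provided context.",
    "deploy_targets": ["slack"],
}

def validate(config: dict) -> bool:
    """Basic pre-deployment sanity check: required fields present."""
    required = {"name", "data_sources", "model", "system_prompt"}
    return required.issubset(config)
```

Keeping the definition declarative like this is what lets a central team review and manage many applications built by different teams from one place.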

    Surprising and delighting users

Overall, John describes his team’s goal with Central RAG as “making it stunningly easy for teams to build RAG applications that surprise and delight people.” We see several organizations adopting this central RAG model both to democratize the development of AI applications and to shorten their teams’ time to impact.

    If you are working on similar problems and want to learn about how MongoDB can help, submit a request to speak with one of our specialists. If you would like to explore on your own, check out our self-paced AI Learning Hub and our gen AI examples GitHub repository.
