
    A Step-by-Step Coding Guide to Integrate Dappier AI’s Real-Time Search and Recommendation Tools with OpenAI’s Chat API

    May 1, 2025

In this tutorial, we will learn how to harness Dappier AI, a suite of real-time search and recommendation tools, to enhance our conversational applications. By combining Dappier’s RealTimeSearchTool with its AIRecommendationTool, we can query the latest information from across the web and surface personalized article suggestions from custom data models. We walk step by step through setting up a Google Colab environment, installing dependencies, securely loading API keys, and initializing each Dappier module. We then integrate these tools with an OpenAI chat model (e.g., gpt-3.5-turbo), construct a composable prompt chain, and execute end-to-end queries, all within nine concise notebook cells. Whether we need up-to-the-minute news retrieval or AI-driven content curation, this tutorial provides a flexible framework for building intelligent, data-driven chat experiences.

    !pip install -qU langchain-dappier langchain langchain-openai langchain-community langchain-core openai

    We bootstrap our Colab environment by installing the core LangChain libraries, both the Dappier extensions and the community integrations, alongside the official OpenAI client. With these packages in place, we will have seamless access to Dappier’s real-time search and recommendation tools, the latest LangChain runtimes, and the OpenAI API, all in one environment.

    import os
    from getpass import getpass
    
    
os.environ["DAPPIER_API_KEY"] = getpass("Enter your Dappier API key: ")


os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")

    We securely capture our Dappier and OpenAI API credentials at runtime, thereby avoiding the hard-coding of sensitive keys in our notebook. By using getpass, the prompts ensure our inputs remain hidden, and setting them as environment variables makes them available to all subsequent cells without exposing them in logs.

    from langchain_dappier import DappierRealTimeSearchTool
    
    
    search_tool = DappierRealTimeSearchTool()
    print("Real-time search tool ready:", search_tool)

We import Dappier’s real-time search module and create an instance of DappierRealTimeSearchTool, enabling our notebook to execute live web queries. The print statement confirms that the tool has been initialized successfully and is ready to handle search requests.

    from langchain_dappier import DappierAIRecommendationTool
    
    
    recommendation_tool = DappierAIRecommendationTool(
        data_model_id="dm_01j0pb465keqmatq9k83dthx34",
        similarity_top_k=3,
        ref="sportsnaut.com",
        num_articles_ref=2,
        search_algorithm="most_recent",
    )
    print("Recommendation tool ready:", recommendation_tool)

    We set up Dappier’s AI-powered recommendation engine by specifying our custom data model, the number of similar articles to retrieve, and the source domain for context. The DappierAIRecommendationTool instance will now use the “most_recent” algorithm to pull in the top-k relevant articles (here, two) from our specified reference, ready for query-driven content suggestions.

    from langchain.chat_models import init_chat_model
    
    
    llm = init_chat_model(
        model="gpt-3.5-turbo",
        model_provider="openai",
        temperature=0,
    )
    llm_with_tools = llm.bind_tools([search_tool])
print("✅ llm_with_tools ready")

    We create an OpenAI chat model instance using gpt-3.5-turbo with a temperature of 0 to ensure consistent responses, and then bind the previously initialized search tool so that the LLM can invoke real-time searches. The final print statement confirms that our LLM is ready to call Dappier’s tools within our conversational flows.

    import datetime
    from langchain_core.prompts import ChatPromptTemplate
    
    
    today = datetime.datetime.today().strftime("%Y-%m-%d")
    prompt = ChatPromptTemplate([
    ("system", f"You are a helpful assistant. Today is {today}."),
        ("human", "{user_input}"),
        ("placeholder", "{messages}"),
    ])
    
    
    llm_chain = prompt | llm_with_tools
print("✅ llm_chain built")

    We construct the conversational “chain” by first building a ChatPromptTemplate that injects the current date into a system prompt and defines slots for user input and prior messages. By piping the template (|) into our llm_with_tools, we create an llm_chain that automatically formats prompts, invokes the LLM (with real-time search capability), and handles responses in a seamless workflow. The final print confirms the chain is ready to drive end-to-end interactions.
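The `|` composition used above can be illustrated with a small toy in plain Python. This is not LangChain’s actual implementation, just a sketch of the idea: each stage is a callable, and `|` produces a new stage that feeds one output into the next. `Step`, `format_prompt`, and `fake_llm` are hypothetical names introduced here for illustration only.

```python
# Toy illustration of the `prompt | llm` pipe pattern (NOT LangChain's real
# Runnable implementation): each Step wraps a function, and `|` chains them.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, then feed the result into `other`.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)


# A stand-in "prompt template" that formats a dict into a prompt string.
format_prompt = Step(lambda d: f"System: helpful assistant\nHuman: {d['user_input']}")

# A stand-in "model" that just reports how many prompt lines it received.
fake_llm = Step(lambda text: f"[model saw {len(text.splitlines())} lines]")

chain = format_prompt | fake_llm
print(chain.invoke({"user_input": "hello"}))  # → [model saw 2 lines]
```

The real `prompt | llm_with_tools` works analogously: the template formats the inputs into messages, and the bound model consumes them.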

    from langchain_core.runnables import RunnableConfig, chain
    
    
    @chain
    def tool_chain(user_input: str, config: RunnableConfig):
        ai_msg = llm_chain.invoke({"user_input": user_input}, config=config)
        tool_msgs = search_tool.batch(ai_msg.tool_calls, config=config)
        return llm_chain.invoke(
            {"user_input": user_input, "messages": [ai_msg, *tool_msgs]},
            config=config
        )
    
    
print("✅ tool_chain defined")

    We define an end-to-end tool_chain that first sends our prompt to the LLM (capturing any requested tool calls), then executes those calls via search_tool.batch, and finally feeds both the AI’s initial message and the tool outputs back into the LLM for a cohesive response. The @chain decorator transforms this into a single, runnable pipeline, allowing us to simply call tool_chain.invoke(…) to handle both thinking and searching in a single step.
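The two-pass pattern above can be sketched in dependency-free Python. `fake_model` and `fake_search` below are hypothetical stand-ins for the bound OpenAI model and Dappier’s search tool, introduced only to make the control flow visible:

```python
# Plain-Python sketch of the two-pass tool-calling loop in tool_chain above.
# fake_model and fake_search are hypothetical stand-ins, not real APIs.
def fake_model(user_input, messages=None):
    if not messages:
        # Pass 1: the model decides it needs a tool and emits a tool call.
        return {"content": None, "tool_calls": [{"query": user_input}]}
    # Pass 2: the model synthesizes an answer from the tool results.
    return {"content": f"Answer based on {len(messages)} prior messages",
            "tool_calls": []}


def fake_search(call):
    # Stand-in for search_tool: returns a tool message for one call.
    return {"role": "tool", "content": f"results for {call['query']}"}


def tool_chain_sketch(user_input):
    ai_msg = fake_model(user_input)                              # pass 1
    tool_msgs = [fake_search(c) for c in ai_msg["tool_calls"]]   # run tools
    return fake_model(user_input, [ai_msg, *tool_msgs])          # pass 2


print(tool_chain_sketch("Who won?")["content"])  # → Answer based on 2 prior messages
```

The real `tool_chain` follows the same shape, with `llm_chain.invoke` in place of `fake_model` and `search_tool.batch` executing the tool calls.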

    res = search_tool.invoke({"query": "What happened at the last Wrestlemania"})
print("🔍 Search:", res)

We demonstrate a direct query to Dappier’s real-time search engine, asking “What happened at the last WrestleMania,” and immediately print the structured result. This shows how easily we can leverage search_tool.invoke to fetch up-to-the-moment information and inspect the raw response in our notebook.

    rec = recommendation_tool.invoke({"query": "latest sports news"})
print("📄 Recommendation:", rec)
    
    
    out = tool_chain.invoke("Who won the last Nobel Prize?")
print("🤖 Chain output:", out)

Finally, we showcase both the recommendation and full-chain workflows in action. First, we call recommendation_tool.invoke with “latest sports news” to fetch relevant articles from our custom data model and print the suggestions. We then run tool_chain.invoke(“Who won the last Nobel Prize?”) to perform an end-to-end LLM query combined with real-time search, printing the AI’s synthesized answer integrated with live data.

    In conclusion, we now have a robust baseline for embedding Dappier AI capabilities into any conversational workflow. We’ve seen how effortlessly Dappier’s real-time search empowers our LLM to access fresh facts, while the recommendation tool enables us to deliver contextually relevant insights from proprietary data sources. From here, we can customize search parameters (e.g., refining query filters) or fine-tune recommendation settings (e.g., adjusting similarity thresholds and reference domains) to suit our domain.
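As one sketch of such customization, the recommendation tool’s constructor arguments from earlier can be collected into presets and swapped per use case. The parameter names mirror the earlier cell; the values, and the `"semantic"` algorithm name, are illustrative assumptions rather than settings confirmed by this tutorial:

```python
# Hypothetical tuning presets for DappierAIRecommendationTool. Keys mirror
# the constructor arguments used earlier; values are illustrative assumptions.
narrow_recent = dict(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",
    similarity_top_k=3,              # few, tightly matched articles
    ref="sportsnaut.com",
    num_articles_ref=2,
    search_algorithm="most_recent",  # as in the tutorial
)

broad_semantic = dict(
    narrow_recent,
    similarity_top_k=5,              # surface more candidates
    num_articles_ref=3,
    search_algorithm="semantic",     # assumed alternative mode
)

# A preset would then be applied as, e.g.:
#   tool = DappierAIRecommendationTool(**broad_semantic)
print(broad_semantic["similarity_top_k"], broad_semantic["search_algorithm"])
```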

    The post A Step-by-Step Coding Guide to Integrate Dappier AI’s Real-Time Search and Recommendation Tools with OpenAI’s Chat API appeared first on MarkTechPost.
