    A Step-by-Step Coding Guide to Integrate Dappier AI’s Real-Time Search and Recommendation Tools with OpenAI’s Chat API

    May 1, 2025

    In this tutorial, we will learn how to harness Dappier AI, a suite of real-time search and recommendation tools, to enhance our conversational applications. By combining Dappier’s DappierRealTimeSearchTool with its DappierAIRecommendationTool, we can query the latest information from across the web and surface personalized article suggestions from custom data models. We walk step-by-step through setting up a Google Colab environment, installing dependencies, securely loading API keys, and initializing each Dappier module. We then integrate these tools with an OpenAI chat model (e.g., gpt-3.5-turbo), construct a composable prompt chain, and execute end-to-end queries, all within nine concise notebook cells. Whether we need up-to-the-minute news retrieval or AI-driven content curation, this tutorial provides a flexible framework for building intelligent, data-driven chat experiences.

    !pip install -qU langchain-dappier langchain langchain-openai langchain-community langchain-core openai

    We bootstrap our Colab environment by installing the core LangChain libraries, the Dappier and community extensions, and the official OpenAI client. With these packages in place, we have seamless access to Dappier’s real-time search and recommendation tools, the latest LangChain runtimes, and the OpenAI API, all in one environment.

    import os
    from getpass import getpass
    
    
    os.environ["DAPPIER_API_KEY"] = getpass("Enter our Dappier API key: ")
    
    
    os.environ["OPENAI_API_KEY"] = getpass("Enter our OpenAI API key: ")

    We securely capture our Dappier and OpenAI API credentials at runtime, avoiding hard-coded keys in the notebook. Because getpass hides what we type, and because the values are stored as environment variables, the keys remain available to all subsequent cells without being exposed in logs.
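
    If we prefer not to retype the keys on every run, Colab’s built-in secrets manager offers an alternative. The short sketch below is our own addition rather than part of the original notebook, and the secret names are placeholders we would first have to create in Colab’s “Secrets” panel.

    # Optional alternative (not in the original notebook): load the keys from
    # Colab's secrets store instead of typing them interactively. The secret
    # names are hypothetical and must match what we saved in the Secrets panel.
    from google.colab import userdata
    import os
    
    os.environ["DAPPIER_API_KEY"] = userdata.get("DAPPIER_API_KEY")
    os.environ["OPENAI_API_KEY"] = userdata.get("OPENAI_API_KEY")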

    from langchain_dappier import DappierRealTimeSearchTool
    
    
    search_tool = DappierRealTimeSearchTool()
    print("Real-time search tool ready:", search_tool)

    We import Dappier’s real‐time search module and create an instance of the DappierRealTimeSearchTool, enabling our notebook to execute live web queries. The print statement confirms that the tool has been initialized successfully and is ready to handle search requests.
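
    As a quick sanity check of our own (not from the original article), we can also peek at the metadata LangChain attaches to the tool, since this name, description, and argument schema are what the chat model will see once the tool is bound in a later cell.

    # Inspect the tool's metadata; LangChain tools are BaseTool instances,
    # so name, description, and args are always available.
    print("Tool name:", search_tool.name)
    print("Tool description:", search_tool.description)
    print("Argument schema:", search_tool.args)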

    from langchain_dappier import DappierAIRecommendationTool
    
    
    recommendation_tool = DappierAIRecommendationTool(
        data_model_id="dm_01j0pb465keqmatq9k83dthx34",
        similarity_top_k=3,
        ref="sportsnaut.com",
        num_articles_ref=2,
        search_algorithm="most_recent",
    )
    print("Recommendation tool ready:", recommendation_tool)

    We set up Dappier’s AI-powered recommendation engine by specifying our custom data model, the number of similar articles to retrieve, and the source domain for context. The DappierAIRecommendationTool instance will now use the “most_recent” algorithm to pull in the top three similar articles (similarity_top_k=3), with two of them drawn from the specified reference domain, sportsnaut.com (num_articles_ref=2), ready for query-driven content suggestions.

    from langchain.chat_models import init_chat_model
    
    
    llm = init_chat_model(
        model="gpt-3.5-turbo",
        model_provider="openai",
        temperature=0,
    )
    llm_with_tools = llm.bind_tools([search_tool])
    print("✅ llm_with_tools ready")

    We create an OpenAI chat model instance using gpt-3.5-turbo with a temperature of 0 to ensure consistent responses, and then bind the previously initialized search tool so that the LLM can invoke real-time searches. The final print statement confirms that our LLM is ready to call Dappier’s tools within our conversational flows.
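
    As an optional, hedged check that is not part of the original notebook, we can probe the binding by asking a time-sensitive question directly; the bound model should answer with a tool call for the Dappier search tool rather than a final response, though the exact behavior depends on the model.

    # Probe the bound model: a question about current events should surface
    # in probe.tool_calls as a request to run the Dappier real-time search tool.
    probe = llm_with_tools.invoke("What are today's top technology headlines?")
    print("Tool calls requested:", probe.tool_calls)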

    import datetime
    from langchain_core.prompts import ChatPromptTemplate
    
    
    today = datetime.datetime.today().strftime("%Y-%m-%d")
    prompt = ChatPromptTemplate([
        ("system", f"we are a helpful assistant. Today is {today}."),
        ("human", "{user_input}"),
        ("placeholder", "{messages}"),
    ])
    
    
    llm_chain = prompt | llm_with_tools
    print("✅ llm_chain built")

    We construct the conversational “chain” by first building a ChatPromptTemplate that injects the current date into a system prompt and defines slots for user input and prior messages. By piping the template (|) into our llm_with_tools, we create an llm_chain that automatically formats prompts, invokes the LLM (with real-time search capability), and handles responses in a seamless workflow. The final print confirms the chain is ready to drive end-to-end interactions.

    from langchain_core.runnables import RunnableConfig, chain
    
    
    @chain
    def tool_chain(user_input: str, config: RunnableConfig):
        ai_msg = llm_chain.invoke({"user_input": user_input}, config=config)
        tool_msgs = search_tool.batch(ai_msg.tool_calls, config=config)
        return llm_chain.invoke(
            {"user_input": user_input, "messages": [ai_msg, *tool_msgs]},
            config=config
        )
    
    
    print("✅ tool_chain defined")

    We define an end-to-end tool_chain that first sends our prompt to the LLM (capturing any requested tool calls), then executes those calls via search_tool.batch, and finally feeds both the AI’s initial message and the tool outputs back into the LLM for a cohesive response. The @chain decorator transforms this into a single, runnable pipeline, allowing us to simply call tool_chain.invoke(…) to handle both thinking and searching in a single step.

    res = search_tool.invoke({"query": "What happened at the last Wrestlemania"})
    print("🔍 Search:", res)

    We demonstrate a direct query to Dappier’s real-time search engine, asking “What happened at the last WrestleMania,” and immediately print the structured result, showing how easily we can use search_tool.invoke to fetch up-to-the-moment information and inspect the raw response in the notebook.

    rec = recommendation_tool.invoke({"query": "latest sports news"})
    print("📄 Recommendation:", rec)
    
    
    out = tool_chain.invoke("Who won the last Nobel Prize?")
    print("🤖 Chain output:", out)

    Finally, we showcase both the recommendation and full-chain workflows in action. First, we call recommendation_tool.invoke with “latest sports news” to fetch relevant articles from our custom data model and print those suggestions. Next, we run tool_chain.invoke(“Who won the last Nobel Prize?”) to perform an end-to-end LLM query combined with real-time search, printing the AI’s synthesized answer that integrates live data.

    In conclusion, we now have a robust baseline for embedding Dappier AI capabilities into any conversational workflow. We’ve seen how effortlessly Dappier’s real-time search empowers our LLM to access fresh facts, while the recommendation tool enables us to deliver contextually relevant insights from proprietary data sources. From here, we can customize search parameters (e.g., refining query filters) or fine-tune recommendation settings (e.g., adjusting similarity thresholds and reference domains) to suit our domain.
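
    As one illustration of that kind of tuning, the sketch below is our own hedged example rather than part of the original tutorial: it reuses the same data model ID and reference domain from the earlier cell, and it assumes “semantic” is among the search algorithms Dappier accepts.

    # Hedged sketch: reconfigure the recommendation tool with a wider top-k,
    # more reference articles, and a semantic (rather than recency-based)
    # ranking, then compare the output against the earlier "most_recent" run.
    tuned_recommendation_tool = DappierAIRecommendationTool(
        data_model_id="dm_01j0pb465keqmatq9k83dthx34",
        similarity_top_k=5,
        ref="sportsnaut.com",
        num_articles_ref=3,
        search_algorithm="semantic",
    )
    print(tuned_recommendation_tool.invoke({"query": "latest sports news"}))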

