
    Building a REACT-Style Agent Using Fireworks AI with LangChain that Fetches Data, Generates BigQuery SQL, and Maintains Conversational Memory

    May 2, 2025

    In this tutorial, we will explore how to leverage the capabilities of Fireworks AI for building intelligent, tool-enabled agents with LangChain. Starting from installing the langchain-fireworks package and configuring your Fireworks API key, we’ll set up a ChatFireworks LLM instance, powered by the high-performance llama-v3-70b-instruct model, and integrate it with LangChain’s agent framework. Along the way, we’ll define custom tools such as a URL fetcher for scraping webpage text and an SQL generator for converting plain-language requirements into executable BigQuery queries. By the end, we’ll have a fully functional REACT-style agent that can dynamically invoke tools, maintain conversational memory, and deliver sophisticated, end-to-end workflows powered by Fireworks AI.

    !pip install -qU langchain langchain-fireworks requests beautifulsoup4

    We bootstrap the environment by installing all the required Python packages, including langchain, its Fireworks integration, and common utilities such as requests and beautifulsoup4. This ensures that we have the latest versions of all necessary components to run the rest of the notebook seamlessly.

    import requests
    from bs4 import BeautifulSoup
    from langchain.tools import BaseTool
    from langchain.agents import initialize_agent, AgentType
    from langchain_fireworks import ChatFireworks
    from langchain.chains import LLMChain
    from langchain.prompts import PromptTemplate
    from langchain.memory import ConversationBufferMemory
    import getpass
    import os

    We bring in all the necessary imports: the HTTP client and HTML parser (requests, BeautifulSoup), the LangChain agent framework (BaseTool, initialize_agent, AgentType), the Fireworks-powered LLM (ChatFireworks), plus prompt and memory utilities (LLMChain, PromptTemplate, ConversationBufferMemory), as well as standard modules for secure input and environment management.

    os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Enter your Fireworks API key: ")

    Next, we prompt for the Fireworks API key via getpass and set it in the environment. This step ensures that subsequent calls to the ChatFireworks model are authenticated without exposing the key in plain text or committing it to the notebook.

    llm = ChatFireworks(
        model="accounts/fireworks/models/llama-v3-70b-instruct",
        temperature=0.6,
        max_tokens=1024,
        stop=["\n\n"]
    )
    

    We demonstrate how to instantiate a ChatFireworks LLM configured for instruction-following, utilizing llama-v3-70b-instruct, a moderate temperature, and a token limit, allowing you to immediately start issuing prompts to the model.

    prompt = [
        {"role": "system", "content": "You are an expert data-scientist assistant."},
        {"role": "user", "content": "Analyze the sentiment of this review:\n\n"
                                    "\"The new movie was breathtaking, but a bit too long.\""}
    ]
    resp = llm.invoke(prompt)
    print("Sentiment Analysis →", resp.content)

    Next, we demonstrate a simple sentiment-analysis example: it builds a structured prompt as a list of role-annotated messages, invokes llm.invoke(), and prints out the model’s sentiment interpretation of the provided movie review.

    template = """
    You are a data-science assistant. Keep track of the convo:

    {history}
    User: {input}
    Assistant:"""

    prompt = PromptTemplate(input_variables=["history", "input"], template=template)
    memory = ConversationBufferMemory(memory_key="history")
    chain = LLMChain(llm=llm, prompt=prompt, memory=memory)

    print(chain.run(input="Hey, what can you do?"))
    print(chain.run(input="Analyze: 'The product arrived late, but support was helpful.'"))
    print(chain.run(input="Based on that, would you recommend the service?"))

    We illustrate how to add conversational memory, which involves defining a prompt template that incorporates past exchanges, setting up a ConversationBufferMemory, and chaining everything together with LLMChain. Running a few sample inputs shows how the model retains context across turns.
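To make the mechanism concrete, here is a dependency-free sketch of what ConversationBufferMemory does for the chain above: it accumulates the raw transcript and injects it into the prompt's {history} slot on every call. This is an illustrative stand-in, not LangChain's implementation; the `BufferMemory` class and the `echo` stand-in for the LLM are hypothetical names.

```python
# Illustrative stand-in (not LangChain's code): how a buffer memory fills
# the {history} slot of the prompt template on every call.
TEMPLATE = ("You are a data-science assistant. Keep track of the convo:\n"
            "{history}\nUser: {input}\nAssistant:")

class BufferMemory:
    """Accumulates the raw transcript, like ConversationBufferMemory."""
    def __init__(self):
        self.turns = []

    @property
    def history(self):
        return "\n".join(self.turns)

    def save(self, user_input, assistant_output):
        self.turns.append(f"Human: {user_input}")
        self.turns.append(f"AI: {assistant_output}")

def run_turn(memory, user_input, fake_llm):
    # Build the full prompt (past transcript + new input), call the model,
    # then persist this turn so the next call sees it.
    prompt = TEMPLATE.format(history=memory.history, input=user_input)
    output = fake_llm(prompt)
    memory.save(user_input, output)
    return prompt, output

memory = BufferMemory()
echo = lambda prompt: "ok"   # stand-in for the real LLM call
first_prompt, _ = run_turn(memory, "Hey, what can you do?", echo)
second_prompt, _ = run_turn(memory, "Analyze this review.", echo)

# The second prompt carries the first exchange verbatim:
assert "Human: Hey, what can you do?" in second_prompt
assert "AI: ok" in second_prompt
```

Because the buffer grows without bound, long conversations eventually overflow the model's context window; LangChain's windowed and summary memories exist to address exactly that.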

    class FetchURLTool(BaseTool):
        name: str = "fetch_url"
        description: str = "Fetch the main text (first 500 chars) from a webpage."

        def _run(self, url: str) -> str:
            resp = requests.get(url, timeout=10)
            doc = BeautifulSoup(resp.text, "html.parser")
            paras = [p.get_text() for p in doc.find_all("p")][:5]
            return "\n\n".join(paras)[:500]

        async def _arun(self, url: str) -> str:
            raise NotImplementedError

    We define a custom FetchURLTool by subclassing BaseTool. This tool fetches the first few paragraphs from any URL using requests and BeautifulSoup, making it easy for your agent to retrieve live web content.
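If you want to see the extraction step in isolation without the requests/bs4 dependencies, the same "collect paragraph text, keep the first few" logic can be sketched with only the standard library. This is an illustrative stand-in for what the tool does to a page it has already downloaded; `ParagraphExtractor` and `first_paragraphs` are hypothetical names, not part of the tutorial's code.

```python
# Stdlib-only sketch of FetchURLTool's extraction step: gather <p> text,
# keep the first few paragraphs, and cap the result at 500 characters.
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paras = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":                 # entering a paragraph: start a new slot
            self.in_p = True
            self.paras.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p:                  # only record text inside <p> … </p>
            self.paras[-1] += data

def first_paragraphs(html: str, limit: int = 5, max_chars: int = 500) -> str:
    parser = ParagraphExtractor()
    parser.feed(html)
    return "\n\n".join(p.strip() for p in parser.paras[:limit])[:max_chars]

html = "<html><body><p>First.</p><div>skip</div><p>Second.</p></body></html>"
print(first_paragraphs(html))   # prints "First." and "Second." separated by a blank line
```

The real tool delegates this to BeautifulSoup, which is far more robust against malformed markup; the sketch only shows why `find_all("p")` plus a character cap is a reasonable "main text" heuristic.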

    class GenerateSQLTool(BaseTool):
        name: str = "generate_sql"
        description: str = "Generate a BigQuery SQL query (with comments) from a text description."

        def _run(self, text: str) -> str:
            prompt = f"""
    -- Requirement:
    -- {text}

    -- Write a BigQuery SQL query (with comments) to satisfy the above.
    """
            return llm.invoke([{"role": "user", "content": prompt}]).content

        async def _arun(self, text: str) -> str:
            raise NotImplementedError

    tools = [FetchURLTool(), GenerateSQLTool()]

    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True
    )

    result = agent.run(
        "Fetch https://en.wikipedia.org/wiki/ChatGPT "
        "and then generate a BigQuery SQL query that counts how many times "
        "the word 'model' appears in the page text."
    )

    print("\n🔍 Generated SQL:\n", result)

    Finally, GenerateSQLTool is another BaseTool subclass that wraps the LLM to transform plain-English requirements into commented BigQuery SQL. It then wires both tools into a REACT-style agent via initialize_agent, runs a combined fetch-and-generate example, and prints out the resulting SQL query.
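Under the hood, initialize_agent drives a Thought → Action → Observation loop: the model picks a tool by name, the framework runs it, and the observation is fed back until the model decides it is done. The following is a minimal, scripted sketch of that loop with stub tools standing in for the LLM's decisions and the real tool implementations; everything here (`script`, `react_loop`, the stub lambdas) is hypothetical and exists only to illustrate the control flow.

```python
# Minimal sketch of the ReAct loop the agent runs for us. The "script" stands
# in for the model's Thought/Action output; the dict stands in for real tools.
tools = {
    "fetch_url": lambda url: "…page text mentioning the word model…",
    "generate_sql": lambda req: (
        "-- Counts occurrences of 'model' in the fetched text\n"
        "SELECT COUNT(*) FROM words WHERE word = 'model';"
    ),
}

script = [
    ("fetch_url", "https://en.wikipedia.org/wiki/ChatGPT"),
    ("generate_sql", "count occurrences of 'model' in the fetched text"),
    ("finish", None),
]

def react_loop(script, tools):
    result = None
    for action, action_input in script:       # each turn: Thought → Action
        if action == "finish":                # the model decides it is done
            return result
        result = tools[action](action_input)  # run the tool; the result is the
                                              # Observation fed back next turn
    return result

print(react_loop(script, tools))
```

In the real agent, the "script" is generated turn by turn by the LLM from the tools' `name` and `description` fields, which is why writing accurate descriptions matters so much for tool selection.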

    In conclusion, we have integrated Fireworks AI with LangChain’s modular tooling and agent ecosystem, unlocking a versatile platform for building AI applications that extend beyond simple text generation. We can extend the agent’s capabilities by adding domain-specific tools, customizing prompts, and fine-tuning memory behavior, all while leveraging Fireworks’ scalable inference engine. As next steps, explore advanced features such as function-calling, chaining multiple agents, or incorporating vector-based retrieval to craft even more dynamic and context-aware assistants.



    The post Building a REACT-Style Agent Using Fireworks AI with LangChain that Fetches Data, Generates BigQuery SQL, and Maintains Conversational Memory appeared first on MarkTechPost.

