
    Building a REACT-Style Agent Using Fireworks AI with LangChain that Fetches Data, Generates BigQuery SQL, and Maintains Conversational Memory

    May 2, 2025

    In this tutorial, we will explore how to leverage the capabilities of Fireworks AI for building intelligent, tool-enabled agents with LangChain. Starting from installing the langchain-fireworks package and configuring your Fireworks API key, we’ll set up a ChatFireworks LLM instance, powered by the high-performance llama-v3-70b-instruct model, and integrate it with LangChain’s agent framework. Along the way, we’ll define custom tools such as a URL fetcher for scraping webpage text and an SQL generator for converting plain-language requirements into executable BigQuery queries. By the end, we’ll have a fully functional REACT-style agent that can dynamically invoke tools, maintain conversational memory, and deliver sophisticated, end-to-end workflows powered by Fireworks AI.

    !pip install -qU langchain langchain-fireworks requests beautifulsoup4

    We bootstrap the environment by installing all the required Python packages, including langchain, its Fireworks integration, and common utilities such as requests and beautifulsoup4. This ensures that we have the latest versions of all necessary components to run the rest of the notebook seamlessly.

    import requests
    from bs4 import BeautifulSoup
    from langchain.tools import BaseTool
    from langchain.agents import initialize_agent, AgentType
    from langchain_fireworks import ChatFireworks
    from langchain import LLMChain, PromptTemplate
    from langchain.memory import ConversationBufferMemory
    import getpass
    import os

    We bring in all the necessary imports: HTTP clients (requests, BeautifulSoup), the LangChain agent framework (BaseTool, initialize_agent, AgentType), the Fireworks-powered LLM (ChatFireworks), plus prompt and memory utilities (LLMChain, PromptTemplate, ConversationBufferMemory), as well as standard modules for secure input and environment management.

    os.environ["FIREWORKS_API_KEY"] = getpass.getpass("🚀 Enter your Fireworks API key: ")

    This cell securely prompts for your Fireworks API key via getpass and stores it in the environment, so subsequent calls to the ChatFireworks model are authenticated without exposing the key in plain text.
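    For non-interactive runs (CI, scheduled notebooks), you may not want a prompt at all. Here is a minimal sketch that reuses an existing environment variable and only falls back to getpass; the ensure_api_key helper is our own illustration, not part of the tutorial's code:

```python
import os
from getpass import getpass

def ensure_api_key(env_var: str = "FIREWORKS_API_KEY") -> str:
    """Return the API key, prompting only when it is not already set."""
    key = os.environ.get(env_var)
    if not key:
        # Falls back to an interactive prompt; never echoes the key.
        key = getpass(f"Enter your {env_var}: ")
        os.environ[env_var] = key
    return key
```

    Calling ensure_api_key() in an environment where the variable is already exported returns immediately without prompting.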

    llm = ChatFireworks(
        model="accounts/fireworks/models/llama-v3-70b-instruct",
        temperature=0.6,
        max_tokens=1024,
        stop=["\n\n"]
    )

    Here we instantiate a ChatFireworks LLM configured for instruction following: the llama-v3-70b-instruct model, a moderate temperature, and a token limit, so you can immediately start issuing prompts to the model.

    prompt = [
        {"role": "system", "content": "You are an expert data-scientist assistant."},
        {"role": "user", "content": "Analyze the sentiment of this review:\n\n"
                                    "\"The new movie was breathtaking, but a bit too long.\""}
    ]
    resp = llm.invoke(prompt)
    print("Sentiment Analysis →", resp.content)

    Next, we run a simple sentiment-analysis example: we build a structured prompt as a list of role-annotated messages, invoke llm.invoke(), and print the model’s sentiment interpretation of the provided movie review.
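    If you need a machine-readable label rather than free text, one option is a small post-processing step over the model's answer. This keyword-matching classifier is a hypothetical add-on of ours, not part of the tutorial's code:

```python
def classify_sentiment(response_text: str) -> str:
    """Map a model's free-text sentiment answer to a coarse label."""
    lowered = response_text.lower()
    has_pos = "positive" in lowered
    has_neg = "negative" in lowered
    if has_pos and has_neg:
        return "mixed"
    if has_pos:
        return "positive"
    if has_neg:
        return "negative"
    return "neutral"

classify_sentiment("Overall positive, though the pacing is a negative.")  # → "mixed"
```

    A structured-output or function-calling approach would be more robust, but this illustrates the idea with no extra dependencies.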

    template = """
    You are a data-science assistant. Keep track of the convo:

    {history}
    User: {input}
    Assistant:"""

    prompt = PromptTemplate(input_variables=["history", "input"], template=template)
    memory = ConversationBufferMemory(memory_key="history")

    chain = LLMChain(llm=llm, prompt=prompt, memory=memory)

    print(chain.run(input="Hey, what can you do?"))
    print(chain.run(input="Analyze: 'The product arrived late, but support was helpful.'"))
    print(chain.run(input="Based on that, would you recommend the service?"))

    We illustrate how to add conversational memory, which involves defining a prompt template that incorporates past exchanges, setting up a ConversationBufferMemory, and chaining everything together with LLMChain. Running a few sample inputs shows how the model retains context across turns.
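    To see what ConversationBufferMemory contributes under the hood, here is a stripped-down, dependency-free sketch of the same idea: each turn is appended to a single history string that is injected into the next prompt. The class name and details are ours, for illustration only:

```python
class SimpleBufferMemory:
    """Toy stand-in for ConversationBufferMemory: accumulate turns
    into one 'history' string that is injected into each prompt."""

    def __init__(self) -> None:
        self.turns: list[str] = []

    def save_context(self, user_input: str, assistant_output: str) -> None:
        self.turns.append(f"User: {user_input}\nAssistant: {assistant_output}")

    @property
    def buffer(self) -> str:
        return "\n".join(self.turns)

mem = SimpleBufferMemory()
mem.save_context("Hey, what can you do?", "I can analyze data and text.")
mem.save_context("Analyze this review.", "The sentiment is mixed.")
next_prompt = f"{mem.buffer}\nUser: Based on that, recommend?\nAssistant:"
```

    Because the whole buffer is replayed every turn, long conversations grow the prompt linearly; LangChain's windowed and summary memories exist to bound that growth.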

    class FetchURLTool(BaseTool):
        name: str = "fetch_url"
        description: str = "Fetch the main text (first few paragraphs) from a webpage."

        def _run(self, url: str) -> str:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()  # surface HTTP errors instead of parsing an error page
            doc = BeautifulSoup(resp.text, "html.parser")
            paras = [p.get_text() for p in doc.find_all("p")][:5]
            return "\n\n".join(paras)

        async def _arun(self, url: str) -> str:
            raise NotImplementedError

    We define a custom FetchURLTool by subclassing BaseTool. This tool fetches the first few paragraphs from any URL using requests and BeautifulSoup, making it easy for your agent to retrieve live web content.
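    The core of the tool is the paragraph extraction. To illustrate that step without a network call or the bs4 dependency, here is an equivalent sketch using only the standard library's html.parser; FetchURLTool itself still uses BeautifulSoup as shown above:

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collect the text of <p> tags, mirroring doc.find_all('p')."""

    def __init__(self) -> None:
        super().__init__()
        self.in_p = False
        self.paras: list[str] = []
        self._buf: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p, self._buf = True, []

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False
            self.paras.append("".join(self._buf).strip())

    def handle_data(self, data):
        if self.in_p:
            self._buf.append(data)

parser = ParagraphExtractor()
parser.feed("<html><body><p>First paragraph.</p><div>nav</div><p>Second.</p></body></html>")
preview = "\n\n".join(parser.paras[:5])  # same "first five paragraphs" cap as the tool
```

    BeautifulSoup handles malformed real-world HTML far more gracefully, which is why the tool depends on it; this sketch only shows the shape of the extraction.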

    class GenerateSQLTool(BaseTool):
        name: str = "generate_sql"
        description: str = "Generate a BigQuery SQL query (with comments) from a text description."

        def _run(self, text: str) -> str:
            prompt = f"""
    -- Requirement:
    -- {text}

    -- Write a BigQuery SQL query (with comments) to satisfy the above.
    """
            return llm.invoke([{"role": "user", "content": prompt}]).content

        async def _arun(self, text: str) -> str:
            raise NotImplementedError


    tools = [FetchURLTool(), GenerateSQLTool()]

    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True
    )

    result = agent.run(
        "Fetch https://en.wikipedia.org/wiki/ChatGPT "
        "and then generate a BigQuery SQL query that counts how many times "
        "the word 'model' appears in the page text."
    )

    print("\n🔍 Generated SQL:\n", result)

    Finally, we define GenerateSQLTool, another BaseTool subclass that wraps the LLM to turn plain-English requirements into commented BigQuery SQL. We then wire both tools into a REACT-style agent via initialize_agent, run a combined fetch-and-generate example, and print the resulting SQL query.
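    When validating whatever SQL the agent produces, it helps to know the expected answer. Here is a local, pure-Python equivalent of the requested word count (whole-word, case-insensitive); the count_word helper is our own illustration, not part of the tutorial's code:

```python
import re

def count_word(text: str, word: str) -> int:
    """Count whole-word, case-insensitive occurrences of `word`."""
    return len(re.findall(rf"\b{re.escape(word)}\b", text, flags=re.IGNORECASE))

sample = "The model is a large language model; models vary."
count_word(sample, "model")  # → 2 ("models" is excluded by the word boundary)
```

    Running this over the text the agent fetched gives a ground-truth count to compare against the generated query's REGEXP_EXTRACT_ALL-style logic.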

    In conclusion, we have integrated Fireworks AI with LangChain’s modular tooling and agent ecosystem, unlocking a versatile platform for building AI applications that extend beyond simple text generation. We can extend the agent’s capabilities by adding domain-specific tools, customizing prompts, and fine-tuning memory behavior, all while leveraging Fireworks’ scalable inference engine. As next steps, explore advanced features such as function-calling, chaining multiple agents, or incorporating vector-based retrieval to craft even more dynamic and context-aware assistants.


    Check out the Notebook here.


    The post Building a REACT-Style Agent Using Fireworks AI with LangChain that Fetches Data, Generates BigQuery SQL, and Maintains Conversational Memory appeared first on MarkTechPost.

