
    Implementing an LLM Agent with Tool Access Using MCP-Use

    May 13, 2025

    MCP-Use is an open-source library that lets you connect any LLM to any MCP server, giving your agents tool access like web browsing, file operations, and more — all without relying on closed-source clients. In this tutorial, we’ll use langchain-groq and MCP-Use’s built-in conversation memory to build a simple chatbot that can interact with tools via MCP. 

    Step 1: Setting Up the Environment

    Installing uv package manager

    We will first set up our environment by installing the uv package manager. For macOS or Linux:

    curl -LsSf https://astral.sh/uv/install.sh | sh 

    For Windows (PowerShell):

    powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

    Creating a new directory and activating a virtual environment

    We will then create a new project directory and initialize it with uv:

    uv init mcp-use-demo
    cd mcp-use-demo

    We can now create and activate a virtual environment. For macOS or Linux:

    uv venv
    source .venv/bin/activate

    For Windows:

    uv venv
    .venv\Scripts\activate

    Installing Python dependencies

    We will now install the required dependencies:

    uv add mcp-use langchain-groq python-dotenv

    Step 2: Setting Up the Environment Variables

    Groq API Key

    To use Groq’s LLMs:

    1. Visit Groq Console and generate an API key.
    2. Create a .env file in your project directory and add the following line:
    GROQ_API_KEY=<YOUR_API_KEY>

     Replace <YOUR_API_KEY> with the key you just generated.
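    The load_dotenv() call we use later reads this file for you. As a rough illustration of what that involves, here is a minimal stdlib-only sketch of a .env loader (a simplification for illustration only; the real python-dotenv also handles quoting, export prefixes, and multiline values):

```python
import os

def load_env_file(path=".env"):
    """Minimal sketch of what load_dotenv() does: read KEY=VALUE lines
    into os.environ without overwriting variables that are already set."""
    if not os.path.exists(path):
        return False
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blank lines, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
    return True
```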

    Brave Search API Key

    This tutorial uses the Brave Search MCP Server.

    1. Get your Brave Search API key from the Brave Search API website.
    2. Create a file named mcp.json in the project root with the following content:
    {
      "mcpServers": {
        "brave-search": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-brave-search"
          ],
          "env": {
            "BRAVE_API_KEY": "<YOUR_BRAVE_SEARCH_API>"
          }
        }
      }
    }

    Replace <YOUR_BRAVE_SEARCH_API> with your actual Brave API key.

    Node.js

    Some MCP servers (including Brave Search) require npx, which comes with Node.js.

    • Download the latest version of Node.js from nodejs.org.
    • Run the installer.
    • Leave all settings as default and complete the installation.

    Using other servers

    If you’d like to use a different MCP server, simply replace the contents of mcp.json with the configuration for that server.
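    For instance, to point the agent at the official filesystem MCP server instead (the package follows the same @modelcontextprotocol naming convention used above; the directory path below is a placeholder you would replace with a real one), mcp.json could look like:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/dir"
      ]
    }
  }
}
```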

    Step 3: Implementing the chatbot and integrating the MCP server

    Create an app.py file in the directory and add the following content:

    Importing the libraries

    from dotenv import load_dotenv
    from langchain_groq import ChatGroq
    from mcp_use import MCPAgent, MCPClient
    import os
    import sys
    import warnings
    
    warnings.filterwarnings("ignore", category=ResourceWarning)

    This section imports the required modules for LangChain, MCP-Use, and Groq, along with dotenv for loading environment variables later. It also suppresses ResourceWarning messages for cleaner output.

    Setting up the chatbot

    async def run_chatbot():
        """Run a chat using MCPAgent's built-in conversation memory."""
        load_dotenv()
        if not os.getenv("GROQ_API_KEY"):
            sys.exit("GROQ_API_KEY is not set. Add it to your .env file.")

        config_file = "mcp.json"
        print("Starting chatbot...")

        # Create the MCP client and the LLM instance
        client = MCPClient.from_config_file(config_file)
        llm = ChatGroq(model="llama-3.1-8b-instant")

        # Create an agent with conversation memory enabled
        agent = MCPAgent(
            llm=llm,
            client=client,
            max_steps=15,
            memory_enabled=True,
            verbose=False
        )

    This section loads the Groq API key from the .env file and initializes the MCP client using the configuration provided in mcp.json. It then sets up the LangChain Groq LLM and creates a memory-enabled agent to handle conversations.

    Implementing the chatbot

    # Add this in the run_chatbot function
        print("\n-----Interactive MCP Chat-----")
        print("Type 'exit' or 'quit' to end the conversation")
        print("Type 'clear' to clear conversation history")

        try:
            while True:
                user_input = input("\nYou: ")

                if user_input.lower() in ["exit", "quit"]:
                    print("Ending conversation....")
                    break

                if user_input.lower() == "clear":
                    agent.clear_conversation_history()
                    print("Conversation history cleared....")
                    continue

                print("\nAssistant: ", end="", flush=True)

                try:
                    response = await agent.run(user_input)
                    print(response)
                except Exception as e:
                    print(f"\nError: {e}")
        finally:
            if client and client.sessions:
                await client.close_all_sessions()

    This section enables interactive chatting, allowing the user to input queries and receive responses from the assistant. It also supports clearing the chat history when requested. The assistant’s responses are displayed in real-time, and the code ensures that all MCP sessions are closed cleanly when the conversation ends or is interrupted.
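    The exit/quit/clear handling above can also be factored into a small pure function — a helper of our own, not part of MCP-Use — which makes the dispatch logic easy to unit-test in isolation (note it also strips surrounding whitespace, slightly more lenient than the loop above):

```python
def classify_input(user_input: str) -> str:
    """Classify a chat line: 'exit' ends the session, 'clear' wipes
    the agent's memory, and anything else is treated as a query."""
    cmd = user_input.strip().lower()
    if cmd in ("exit", "quit"):
        return "exit"
    if cmd == "clear":
        return "clear"
    return "query"
```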

    Running the app

    if __name__ == "__main__":
        import asyncio
        try:
            asyncio.run(run_chatbot())
        except KeyboardInterrupt:
            print("Session interrupted. Goodbye!")
        finally:
            # Redirect stderr to suppress noisy teardown messages from MCP subprocesses
            sys.stderr = open(os.devnull, "w")

    This section runs the asynchronous chatbot loop, managing continuous interaction with the user. It also handles keyboard interruptions gracefully, ensuring the program exits without errors when the user terminates the session.
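    The same run pattern can be seen in isolation with a stand-in coroutine in place of run_chatbot() (main() here is just a placeholder for any long-running async entry point):

```python
import asyncio

async def main() -> str:
    # Stand-in for run_chatbot(): any long-running coroutine works here
    await asyncio.sleep(0)
    return "done"

if __name__ == "__main__":
    try:
        print(asyncio.run(main()))
    except KeyboardInterrupt:
        print("Interrupted. Goodbye!")
```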


    Step 4: Running the app

    To run the app, use the following command:

    uv run app.py

    This will start the app; you can now chat with the assistant, which can call the configured MCP server’s tools throughout the session.

    The post Implementing an LLM Agent with Tool Access Using MCP-Use appeared first on MarkTechPost.
